All right, well, first off, I want to say hello, everyone. Welcome to this month's Mother of All Demo Days meeting. In this iteration, we are thrilled to welcome folks joining us from our PL network companies, alongside EngRes teams, to share their latest progress and groundbreaking projects during this demo session. Just a quick run-through of who we have presenting today: we'll have Fixtureplate from the Bedrock and Tornado teams, a short demo of Memex Garden with Oliver, creating confidential smart contracts with FHE and fhEVM with Zama, DAG House presenting on exploring content claims with GraphQL, and we'll learn more about building a web app on IPFS with Fireproof Storage. Starting with that, first off we have Rod Vagg presenting on behalf of the Bedrock and Tornado teams: Fixtureplate, explaining and generating UnixFS DAGs. Hello. I want to introduce a little project that was released today called go-fixtureplate. It's in the IPLD org on GitHub. It's a tool that came out of some work we were doing on retrievals, trying to get test fixtures for UnixFS pathing and just DAG pathing in general, for CARs and downloads, all that sort of stuff. We needed fixtures, but we also needed assurance that we were getting the right blocks out of a DAG for all the different forms of queries we were making. So I'm going to quickly show you how this works as a CLI. You can download the binary, called fixtureplate, from GitHub, or you can go install it. So I've got a CID here that I'm going to fetch with Lassie, off wherever it's coming from. I know this CID points to a DAG that is a single file that takes up many blocks. So I'm now going to use fixtureplate to explain that CAR for me and see what's in it, and why it needed to have so many blocks. Here I can see my single file is sharded across many leaf blocks, and you can see which bytes occupy which blocks. So this explains why that one CID resulted in all these blocks.
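The "one file, many leaf blocks" idea can be sketched in a few lines of Python. This is a toy fixed-size chunker, not the real UnixFS chunker (whose default chunk size is far larger), and the truncated hash standing in for a CID is purely illustrative:

```python
# Toy sketch of why a single file CID expands into many blocks:
# UnixFS-style chunking splits the file bytes into fixed-size leaves,
# and a root node links to them by CID.
import hashlib

CHUNK_SIZE = 4  # tiny for demonstration; real UnixFS chunks are ~256 KiB

def chunk_file(data: bytes):
    """Split data into leaf 'blocks', recording which bytes each covers."""
    blocks = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        cid = hashlib.sha256(chunk).hexdigest()[:8]  # stand-in for a real CID
        blocks.append({"cid": cid, "bytes": (offset, offset + len(chunk))})
    return blocks

leaves = chunk_file(b"hello fixtureplate!")
for leaf in leaves:
    print(leaf["cid"], leaf["bytes"])
print("leaf blocks:", len(leaves))
```

Each leaf records which byte range of the original file it holds, which is exactly the mapping `fixtureplate explain` prints for a sharded file.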
You can do more interesting things. I'll get the Wikipedia CID — so Lassie fetches Wikipedia, and I'm going to fetch the cat page from it. That contains a bunch of blocks too. So why did one page need so many blocks? fixtureplate explain that file, and I'm also going to say ignore-missing, because there are a lot of missing blocks here, since it's not all of Wikipedia. Now I can see that this one file, cat, was sharded across two different blocks, and it was part of a sharded directory. The wiki directory is really large, so it gets sharded at multiple levels, and this shows you how it navigates through that. So the CID I requested relates to the page that I got through all of these steps. This is how we make a trustless CAR, and this explains how we navigate through the DAG to get the blocks we want. Now we can do more interesting things with fixtureplate: we can generate synthetic DAGs to use for testing, and it's got a little DSL on the command line. I'm going to generate a directory that's got 10 files that are approximately 1k each, or exactly 1k each. It's going to tell me what it's doing and give me a CAR; I can explain that CAR and it'll show me what it did — it made this directory for me. Now it can get much more interesting than that. I can say: one file of one meg, but I want to make a subdirectory that is sharded. This is where it gets really interesting. That subdirectory is going to take approximately 20 files of approximately 10 bytes each. So let's make that, and I'll explain that whole DAG. This is the whole DAG that we made. This is the kind of thing you would see, maybe not with these names, but it's a random DAG that I might use for testing purposes. Now, how would I use it for testing? Well, let's say I want to get this file here, which is inside of a sharded directory. So I'm going to explain that CAR again, and I'll path into that file.
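The multi-level directory sharding mentioned here can be illustrated with a toy HAMT-style lookup: hash the entry name and use successive digits of the hash to pick a bucket at each level. The fanout and level count are illustrative, not the real UnixFS HAMT parameters:

```python
# Toy sketch of HAMT-style directory sharding: a huge directory is split
# into buckets, and an entry's name hash decides which shard block holds
# it at each level. Looking up one page only touches the shards on its path.
import hashlib

FANOUT = 16  # illustrative; real UnixFS HAMT shards commonly use fanout 256

def shard_path(name: str, levels: int = 3):
    """Return the bucket chosen at each shard level for this entry name."""
    digest = hashlib.sha256(name.encode()).hexdigest()
    return [int(digest[i], 16) for i in range(levels)]

# Two entries in the same giant directory land on different shard paths,
# so fetching "Cat" skips the shard blocks that only "Dog" needs.
print("Cat ->", shard_path("Cat"))
print("Dog ->", shard_path("Dog"))
```

This is why the explain output walks through several intermediate shard blocks before reaching the page itself.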
And it will show me which blocks would be needed for a trustless query from the root all the way down to that file. I can even do things like byte ranges. So if I take this 1 meg file and I want bytes from, let's say, this block to the end — this is the kind of query you would do — I've now got just a certain byte range of that file. And if I passed that query on to the IPFS trustless gateway, I would download a CAR, and I should get back a CAR with these four blocks in it, just for that entity-bytes query. So this is useful for that kind of testing, and it's now used in our retrieval tools; we've got it built into some of our integration tests. But I think it's actually a really good tool for understanding DAGs, particularly as we start talking about trustless CARs. You can use it to explain what on Earth is inside a CAR and why you should trust it. And that's it. Up next, we have Memex with Oliver. So this may be outside of the hard engineering domain — it's probably more of an adjacent or supportive tool for your work, in particular for workflows where you need to read and research websites, papers, GitHub repos, whatever, either by yourself or with other people. And that's where things become really tricky. Maybe some of you have already been running into these issues: you wanted to save something, you need to copy-paste links around, you need to copy-paste text sections around, and you end up with graveyards of documents all over the place holding the information that was really important. It becomes messier and messier, in particular if you have to collaborate with other people. It's already a problem if you're by yourself, but it gets exponentially worse if you have to do any sort of reading together of papers, websites, et cetera. So we built Memex to make that a bit easier.
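The byte-range query can be sketched by combining a block list (like the chunking above) with an overlap test: a trustless response needs the root plus every leaf whose byte range intersects the requested range. The names and ranges below are made up for illustration:

```python
# Toy of an entity-bytes style query: given leaves annotated with the byte
# ranges they cover, select the blocks a trustless CAR response must carry
# for a requested range: the root, plus every overlapping leaf.
def blocks_for_range(root_cid, leaves, start, end):
    needed = [root_cid]
    for leaf in leaves:
        lo, hi = leaf["bytes"]
        if lo < end and hi > start:  # half-open interval overlap test
            needed.append(leaf["cid"])
    return needed

leaves = [
    {"cid": "leaf0", "bytes": (0, 256)},
    {"cid": "leaf1", "bytes": (256, 512)},
    {"cid": "leaf2", "bytes": (512, 768)},
]
print(blocks_for_range("root", leaves, 300, 600))
```

A verifying client does exactly this selection to check that the CAR it got back contains precisely the blocks the query required, no more and no fewer.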
And yeah, I'm just going to start with the more personal organization parts — how you keep track of the things you read online — and the next section will be how you do that collaboratively. The most basic thing you do with Memex is, if you want to save an article, you just press save, and from that point on it's full-text searchable. That means you can find it even if you didn't put any organization on top: you can find it by all the words inside the article. So I'll just copy this word, then you press M-space in the address bar and you can search for "dialogues". It finds all the articles containing "dialogues" — you see the first one is Aristotle — but you can also go to the dashboard, where you can see a full overview of all the articles you saved, or apply more filters: for example the timeframe, the domain it was on, or the spaces you put it in. Spaces are for us a bit like tags, with the difference that you can also share them or collaboratively curate them. So you don't have to organize, but you can do some organization with that. And soon you're also going to have the ability to have nested spaces, so you can create trees that represent the folder structures you maybe have in your bookmarks, or just organize things by project. The next thing you can do with Memex is highlight and annotate. If you want to mark up a piece of text, you just make a highlight here; you can also mark up a piece of text and add a comment to it: hey, this is important. This also works with papers. So if we go to an arXiv paper, for example — and it also works, by the way, with papers that are stored locally. If you want to annotate a local PDF, you just drag it into the browser and do the same thing. It just opened the reader, where you can start annotating in Memex like this, and then you just highlight a piece of text, same thing.
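The "findable by every word inside the article" behavior is classic inverted-index full-text search. Here's a minimal sketch of the idea in Python — not Memex's actual implementation, just the data structure the feature implies:

```python
# Minimal inverted-index sketch of full-text search: every saved page is
# tokenized, and each word maps back to the set of pages containing it,
# so a one-word query is a single dictionary lookup.
from collections import defaultdict

index = defaultdict(set)

def save_page(page_id: str, text: str):
    for word in text.lower().split():
        index[word.strip(".,!?")].add(page_id)

def search(word: str):
    return sorted(index.get(word.lower(), set()))

save_page("aristotle", "The dialogues of Plato mention Aristotle.")
save_page("cats", "Cats are popular on Wikipedia.")
print(search("dialogues"))
```

Real engines add stemming, ranking, and phrase queries on top, but the lookup path is the same.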
And actually, hopefully by tomorrow, or Monday at the latest, you're also going to be able to drag rectangles that create screenshots and anchor annotations to them, so you can annotate illustrations, et cetera, that are hard to capture in pure text. The last content type you can annotate is YouTube videos. There you can either create timestamped notes — sections of the video with timestamps — and if you click on those timestamps, it jumps back and forth in the video to the places you annotated. The second thing is smart notes, which essentially summarize for you the last X seconds of the video. So if you don't want to type up what's in the video, you can just let the AI do it for you, and you can decide how many seconds to include in that summary. You can also summarize the entire video, by the way: we have an AI assistant that lets you say, hey, tell me the key takeaways of this video, and it will go and analyze the video and give you a summary — and you can prompt it however you want, ultimately. The last bit is that you can also make screenshot-based annotations. You can say: I want a snapshot of the current frame, including a timestamp, and make a note there, because sometimes you want to capture, for example, an important graphic they used in the video. And the last piece of the personal organization stuff: since two days ago, you also have an Obsidian and Logseq integration that automatically syncs everything you save and annotate into your graphs. So if you use either of those, you can just go to "How I learned to stop worrying about nuclear waste" — you see it's automatically already there. It's actually super snappy: if I add a new note here, you'll see how fast it shows up. That's it — it's very fast.
Luckily, because Obsidian's architecture is so close to the file system, it's just very fast: we save our updates straight to the file system, which in this case is a big advantage. That would actually also be a big advantage for using something like this with IPFS-powered tools, because you just need to write to disk and that's it. We have a kind of local backup helper that lets you quickly save anything to local disk, and it's also going to be a bit of a jumping-off point later for API connections you want to make elsewhere, maybe to other apps you want to integrate. In that local backup helper we also had, a while ago, a little prototype for hooking in IPFS natively. So if someone wants to revive this and make Memex work more natively with IPFS, just hit me up and we can chat about it. So yeah, those were the — oh, damn it. I just realized you didn't see the sync to Obsidian, because that part of the screen was not shared, bummer. But it's very snappy, and I want to show it because it makes a lot of sense. So for example, if I now add a note here — this is Obsidian — you see how fast it appears. It's really quick, essentially instant. And here are all the notes and the screenshots I made before, and those also link back to the YouTube video sections, so you can always get back to that particular timestamp — it's all Markdown, interoperability. Great. So the last bit I want to show is how you collaborate. This might be for people who have the workflow of often needing to share, for example, commentary on the things they read.
Maybe you want to discuss a paper in depth with other people, and you want a quick way of doing that without needing to spam your chat logs or copy-paste the content of the paper into a Google Doc and start there, which I've heard a lot of people do. To start annotating a page together, the only thing you need to do is press share page. That creates a link to the page with the annotations you had on it. In this case, the annotations I created before were private, so they're not automatically added, but I can add them. All the annotations I now add while I'm in this so-called focus mode will automatically be added. I'm just doing this quickly now. Then, with that link, I can invite people with either read access or contribute access. When they open the link, they get to our web reader, which is a renderer for the annotations that they can use even if they don't use Memex. So this is the view someone sees who doesn't have the Memex extension. They can see your highlights; they can even make their own highlights on top, add a comment, whatever — and they don't need to install anything. You just send them a link. Our design objective was basically making it as easy as working in a Google Doc when you want to collaborate with other people. That's the workflow for just one page. If you want to share an entire research collection — say you want to dive into some new machine learning technique and collect a bunch of papers, websites, and videos — you can do that using the spaces I hinted at before. Let me find one; here's one, for example. Those can also be shared. You can open them in a web view; these are links people can open, again, even if they don't use Memex.
They see all your links here, and all the annotations. If they click on those results, they get to that reader I showed before. They can also summarize the article straight from here, so they get a kind of skimmable overview of the things you put in, without needing to read every single piece. Yeah, that's it — that was Memex. If you want to get started with it, check it out at memex.garden; there you can download it. We're actually just about to come out of our closed beta, so this is really timely to present today, but we already have sign-ups, logins, and downloads open. So check it out. Enjoy. Up next is Clément with Zama. Hi, today I'm presenting fhEVM. This is a project we are working on at Zama. Zama, by the way, is a company working on homomorphic encryption. Homomorphic encryption, to summarize, is the ability to compute over encrypted data. fhEVM is a project dedicated to integrating homomorphic encryption — computation over encrypted data — directly into an EVM. Today I'll show you an example: a classic ERC-20 contract. To use what we've done, you basically need to import a library available on npm. The idea behind fhEVM is that we added some precompiled contracts to compute over encrypted data. So if you take a look at our contract, we have the total supply, which is not a classic uint256 but an encrypted uint32, and the same for the balances, because the idea is that the balances of every user are encrypted. But even though they're encrypted, we can still do transfer, mint, et cetera. So this is like a classic ERC-20; the few differences we'll see are, for example, in the way we mint. If you mint an amount of tokens, the user sends an encrypted amount, and with fhEVM we need to validate this amount.
To validate the amount, we check a zero-knowledge proof that it's a valid ciphertext. Once we've validated it, we get an encrypted uint and we can add it directly to the balance of the contract owner and, of course, to the total supply. If we take a look at the transfer method, you'll see it's really similar to what you would do in a normal contract. If you want to transfer some tokens, you send an encrypted amount to someone. First you need to check that the sender has enough tokens to transfer. This is like a require, but at this point you need a decryption, because when you compare the amount against the sender's balance, you get an encrypted boolean. It's encrypted — you don't know whether the user has enough tokens — so you need to decrypt. When we decrypt something, we are leaking some information, but the only information we leak here is whether this user has this encrypted amount in this encrypted balance. So basically we don't leak much information, though in other cases it could be a problem. So that's the require, and then you see the balance updates are like what you would do in a classic transfer: you're just adding and removing amounts from the balances. We can do that because we are using Solidity 0.8.19, which allows operator overloading. Behind the scenes, these operators are interpreted via precompiled functions doing homomorphic computation. It looks like classic computation, but it's really homomorphic encryption behind the scenes. So we can test it. This is a classic Remix instance, because we don't change the compiler — it's a classic Solidity compiler, so you can compile your contract. There is nothing new, because when you use the TFHE library, we basically just call precompiles. The only requirement is that you need to deploy it on an EVM with these precompiles.
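The "add encrypted amounts without decrypting" property can be demonstrated with a toy additively homomorphic scheme. This is textbook Paillier with tiny, insecure primes — not the TFHE scheme fhEVM actually uses — but it shows the core trick the balance arithmetic relies on: combining ciphertexts combines the plaintexts underneath.

```python
# Toy Paillier cryptosystem (tiny primes, insecure, illustrative only).
# Multiplying two ciphertexts mod n^2 yields a ciphertext of the *sum*
# of the plaintexts: additive homomorphism.
import math
import random

p, q = 11, 13
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)  # valid because g = n + 1

def encrypt(m: int) -> int:
    r = random.choice([x for x in range(2, n) if math.gcd(x, n) == 1])
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

balance = encrypt(100)
deposit = encrypt(25)
# "Adding" encrypted values = multiplying the ciphertexts mod n^2.
new_balance = (balance * deposit) % n2
print(decrypt(new_balance))  # 125
```

TFHE goes much further (it supports comparisons and arbitrary circuits, which plain Paillier cannot do), but the mint-and-add flow above is the same shape: the chain never sees 100 or 25, only ciphertexts.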
So you need to deploy this on an fhEVM, basically. We'll switch to a Metamask account connected to the devnet running fhEVM, and we can deploy our contract. When the contract is deployed, the first thing we want to do is mint. We are using a specific version of Remix — it's exactly the same as Remix, but we added a small tool to encrypt data on the fly. For example, I want to mint 1,000 tokens: I just type 1,000, but in fact it will create the ciphertext with the proof directly in Remix. So I can make a transaction; I'm asking the EVM to mint an encrypted amount, so no one knows how many tokens I asked to mint. And if you look at the transaction hash — this is my mint transaction — you can see the input is fully encrypted: there is no 1,000 appearing anywhere. The next step is to check the balance of the owner. If you want to check someone's balance, you want to be sure you're not exposing it to just anyone, because a call is not authenticated: you could ask for any balance and pretend to be anyone. To handle that, we use an EIP-712 signature. The idea is that when, as a user, you want to allow a dapp to access some information, you sign a public key and send that public key to the function. Then we have a method called re-encrypt: you send the ciphertext and the public key, and we do a re-encryption, because the whole EVM uses the same FHE key. We re-encrypt with the user's key, and to be sure it's a valid user key, the user needs to sign it. For example, I'll use balanceOf. The dapp asks me to sign the public key; I add the signature. Now I can call with my public key and the signature of the public key, so the contract knows that I'm really the message sender, and it allows me to re-encrypt my balance.
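The authorization logic in that flow can be sketched separately from the cryptography. In this toy, an HMAC stands in for the real EIP-712 signature, the "re-encryption" is a placeholder string, and all names are hypothetical — the point is only the gate: no valid signature over the ephemeral public key, no re-encrypted balance.

```python
# Sketch of the re-encryption authorization flow: the user signs an
# ephemeral public key; the view function re-encrypts the stored balance
# to that key only if the signature checks out. HMAC is a stand-in for
# the EIP-712 wallet signature; the crypto here is symbolic.
import hashlib
import hmac

user_signing_key = b"user-secret"  # stands in for the wallet's signing key
balances = {"alice": "ciphertext-under-network-fhe-key"}

def sign_pubkey(ephemeral_pubkey: bytes) -> bytes:
    return hmac.new(user_signing_key, ephemeral_pubkey, hashlib.sha256).digest()

def balance_of(sender: str, ephemeral_pubkey: bytes, signature: bytes) -> str:
    expected = hmac.new(user_signing_key, ephemeral_pubkey, hashlib.sha256).digest()
    if not hmac.compare_digest(signature, expected):
        raise PermissionError("signature does not match sender")
    # authorized: re-encrypt the stored ciphertext to the caller's key
    return f"{balances[sender]} re-encrypted to {ephemeral_pubkey.hex()}"

pk = b"\x01\x02"
result = balance_of("alice", pk, sign_pubkey(pk))
print(result)
```

In the real system the verification is asymmetric (the contract recovers the signer's address from the EIP-712 signature), but the control flow is the same: signature first, ciphertext second.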
It will be the same for the transfer. If I want to transfer some tokens to a certain account, it's basically the same: let's say I send 200, and transact. And confirm — let me switch back to the right account in Metamask. Okay. And now I have 800 tokens on this account, because I transferred 200. So that's all. If you want to try this, it's already available on the devnet; we just announced the alpha version of fhEVM. I can quickly share the link — you have all the documentation there on fhEVM, and you can already play with it and try to build smart contracts with encryption included. And that's all. Up next, from DAG House, we have Alan Shaw. Hi everyone, nice to be with you again. I'm here to talk to you a little bit about this new thing that's coming to web3.storage, and that new thing is called content claims. Content claims are signed assertions about a piece of content, and they can say different things. So let me share my screen, because that will probably help more than me just chatting. There we go: content claims. There are various different types of claims you can make. You can think of content claims a bit like this: in the DHT you publish provider records, which say that this CID is provided by this particular peer on the network. Content claims are kind of like that, but they're UCANs — they're signed by people — and they can say different things. They can say that this particular content can be found at this location, in this CAR. They can say things like: this particular piece of content can be found in these particular CAR shards, which is a partition claim.
And they can say something like: this particular CAR file includes this set of blocks, where the includes CID would be, for example, a CID of a CARv2 index. These are new things coming to our new APIs, which we're launching hopefully near the end of this year and which will be fully UCANified, but we've kind of retrofitted them to the existing API, so you don't have to do anything: whenever you upload anything to web3.storage right now, behind the scenes we will actually generate some content claims for your content. So what I'm going to do today is upload a piece of content and then explore the content claims that got generated for it. I'm going to use my account, which has loads of stuff in it — it can barely list it all. But first, I'm going to take a little photo to prove that this is real and live. So here's the photo of me. There we go. I'm going to upload this; I'll put it on my desktop first and call it mugshot. There we go, let's get rid of that. Then you can just drag and drop your files and they get uploaded. We have a small bug here which is a little off — sometimes it seems to upload and sometimes it doesn't, but hopefully it's uploading. All right, there we go, that was better. Okay, so here's my mugshot, and I've got the CID of my photo. I should just be able to go to w3s.link — it would help if I could spell — w3s.link slash that CID, and that should hopefully be my face. There it is. Anyway, nothing new so far: this is just what you can do with web3.storage right now. But behind the scenes, a load of content claims have been created. And if you go to — what is the URL? — graphql.claims.dag.house, because it's not officially launched yet...
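The claim types described so far can be modeled as simple records keyed by the CID they are about. This toy store mirrors the shapes (partition, inclusion), not the real UCAN-signed format, and every CID string here is made up:

```python
# Toy content-claims store: each claim is a record about some CID.
# A partition claim maps a content CID to the CAR file(s) holding its DAG;
# an inclusion claim maps a CAR CID to an index describing its blocks.
from collections import defaultdict

claims = defaultdict(list)

def publish(about_cid: str, claim_type: str, **fields):
    claims[about_cid].append({"type": claim_type, **fields})

def read(about_cid: str):
    return claims[about_cid]

publish("bafy-root", "partition", parts=["bag-car-1"])
publish("bag-car-1", "inclusion", includes="bafy-carv2-index")

# Walking the chain: content CID -> its CAR -> that CAR's index.
print(read("bafy-root"))
print(read("bag-car-1"))
```

The GraphQL exploration in the demo is essentially this walk: query claims about the root, take the part CID out of the partition claim, then query claims about that part.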
...we get this GraphQL interface, which is really nice. What you can do here is explore the content claims that have been generated for a particular piece of content. So if I just put that CID in, and also include the __typename, and run the query, then in theory I should get back a list of the content claims that have been created for that particular CID. We can see here we've got a partition claim and something called a relation claim. A partition claim is basically saying that this piece of content is found in these CAR CIDs — it's been put in a CAR file and sent to us. Sometimes, when the content is really big, the DAG will be split into multiple CAR files, so a partition claim can say: this DAG can be found in this set of CAR files. So I can do a "... on PartitionClaim" and actually list out the parts here. You can see that for this particular CID there was one part: it's one CAR file. And then if I go to cid.ipfs.io and paste it in, you can see that that CID is actually for a CAR file, a content-addressed archive — this CID addresses a CAR file directly; it's the hash of a CAR file. That's pretty cool. But then we can go further. Oh, and what you should be aware of — what's really nice — is that you can actually drill into the claims here and see all of the information; it's really nice. Anyway, as I was saying, you can keep going. I can say: well, I've got this CAR CID — what claims were made about this CAR CID? So if I put claims in here, I can see whether any claims were made for this particular CAR CID; I'll throw that in. And we get back the same thing, but look — we've got an extra piece here, which says that for this CAR there's an inclusion claim.
An inclusion claim basically says that there is another CID with information about what is included in this piece of content. With inclusion claims, what you can do is "... on InclusionClaim" — bear with me, I'm not super good at typing — and on the claim's includes field you can ask for the CID of the thing that describes what this CID includes. In this case, this will be a CID of a CARv2 index — a CARv2 multihash index sorted index. So an inclusion claim is basically saying: for this CAR, this index has information about what blocks are in it and at what byte offsets you can find them. That's pretty cool. And then we can keep going even deeper. So, claims for this one; let's have a look at the __typename. If we ask for claims on this one, we've got another partition claim. Partition claims, again, say that this particular CID is found in a CAR part. So let's have a look at the parts — this is about as deep as we're going to go. You can see how this works — oh, I need to do "... on PartitionClaim". There we go. Sorry, let me zoom right in; out of my way. There we go, I think that's good now. So that is, again, a CAR CID. In theory, what this is saying is that this CARv2 index can be found in this CAR. And I can actually go to the gateway — w3s.link/ipfs/ — put in this CAR CID, and it will download it; I'll just put it on my desktop for now. And that CAR should have a CARv2 index in it. Luckily we can prove it, so let's do that. Ah, Zoom, get out of the way — I need a terminal. There we go; let me make it bigger. So I'm on my desktop. Do I have time? I've got time. Okay. On the desktop I've got this handy kind of IPLD explorer thing.
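What a multihash index sorted CARv2 index buys you can be shown with a toy sorted mapping from block multihash to byte offset inside the CAR. The multihash strings and offsets below are invented for illustration:

```python
# Toy of a CARv2 "multihash index sorted" index: entries sorted by
# multihash, each mapping a block's multihash to its byte offset in the
# CAR, so a client can binary-search and seek straight to a block.
from bisect import bisect_left

# (multihash, offset) pairs, kept sorted by multihash
entries = sorted([
    ("zQmFileLeaf", 1024),
    ("zQmDirNode", 59),
])
keys = [mh for mh, _ in entries]

def offset_of(multihash: str) -> int:
    i = bisect_left(keys, multihash)
    if i < len(keys) and keys[i] == multihash:
        return entries[i][1]
    raise KeyError(multihash)

print(offset_of("zQmFileLeaf"))
```

The real index stores binary multihashes with fixed-width offset integers, but the access pattern is the same: sorted keys, one seek per block.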
And what I can do is an import of that CAR — there we go. Okay, cool. If I inspect this, I can see that this is actually a multihash index sorted CAR index, and it's showing me these two multihashes — these are base58btc-encoded multihashes. One of them is probably the directory (the file was in a directory), and the other is probably the file itself. Actually, it doesn't matter which is which, because these numbers are byte offsets, not file sizes: they're the byte offsets within the CAR at which you can find these blocks. And we can actually prove that this is true for my mugshot — wherever that went. There it is. Okay, so that's the CID; I need to somehow get a multihash out of that CID. There's a CID tool which allows you to reformat CIDs — it's really hard to speak and type at the same time, have you ever tried that? So what we're saying is: format me this CID; the %M means give me the multihash, and -b tells it to encode in base58btc. And this is the CID of my mugshot there — you can see it behind, see-through. That should come out as that. So look for that prefix: this particular multihash index sorted index is indeed describing the blocks that are in my CAR. And because I downloaded this multihash-sorted index from our gateway via a CAR file, I can also download the CAR file that my actual content is in — the very first claim we explored was a partition claim, and it said the content is in this CAR.
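The CID-to-multihash step is just a re-encoding of bytes. Here's a minimal base58btc encoder — the alphabet used for those index keys. Real CIDs also carry version and codec prefixes that a proper library strips before re-encoding; this sketch only shows the base conversion:

```python
# Minimal base58btc encoder (the Bitcoin alphabet: no 0, O, I, or l).
# Treat the bytes as a big integer, repeatedly divide by 58, and preserve
# leading zero bytes as leading '1' characters.
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58btc_encode(data: bytes) -> str:
    num = int.from_bytes(data, "big")
    out = ""
    while num:
        num, rem = divmod(num, 58)
        out = ALPHABET[rem] + out
    leading_zeros = len(data) - len(data.lstrip(b"\0"))
    return "1" * leading_zeros + out

print(base58btc_encode(b"hello"))   # Cn8eVZg
print(base58btc_encode(b"\x00\x01"))  # 12
```

This is why the same digest prints differently depending on the multibase flag you hand the formatting tool: the bytes are identical, only the textual encoding changes.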
So if I download this CAR file, it will have my content in it. Which I could maybe do — I've got two minutes left. Okay, this is previously unexplored territory, but I should be able to download this and then... okay, cd — where did that go? Over here, that one; I put it there. Whoa, don't show that — it opened the App Store; I just want to use the command line. So it was that one, right? The ipfs-car tool: list. Great — okay, that's not quite what I want. Blocks. There we go, there are the blocks. Not quite sure what I was going for there, but anyway: the CAR has the blocks for my file in it. Essentially what this means is that, given the root CID of some content, you can use content claims client-side to figure out what CAR it's in, what blocks are in that CAR, and what byte offsets they're at, and then just go and get the blocks that you need — you don't have to download the whole thing. And what's cool about the gateway where you download CAR files is that you can actually issue HTTP range requests to it. So once you've got the CARv2 multihash index, you can say: I want these blocks, they're at these particular ranges in my target CAR — go and get me those bytes. You can do batching to extract just the bits you need, and you can do all of that client side. And because we receive CARs from users and store CARs at rest in buckets, the server doesn't have to do any work: we make range requests into CARs and that's about it. Last but not least, we have J Chris with Fireproof Storage. Hey, y'all. So I pre-recorded this because I wasn't sure I was going to make it in time, and it's also kind of orchestrated, but I'm super excited to show it off.
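The index-to-range-request step above is mechanical: each (offset, length) pair from the index becomes an HTTP Range header against the gateway's CAR endpoint. The offsets and lengths here are illustrative, not taken from a real CAR:

```python
# Sketch of turning CARv2 index offsets into HTTP range requests:
# fetch only the byte spans that hold the blocks you need, instead of
# downloading the whole CAR.
def range_header(offset: int, length: int) -> dict:
    # HTTP byte ranges are inclusive on both ends.
    return {"Range": f"bytes={offset}-{offset + length - 1}"}

# Suppose the index said our two blocks live at these (offset, length) spans:
wanted = [(59, 120), (1024, 4096)]
for off, size in wanted:
    print(range_header(off, size))
```

A client can also coalesce adjacent spans into one request (the "batching" mentioned above) so the number of round trips stays small.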
It uses a lot of the tech that you're already familiar with, like Alan's Pail clock and w3clock from Web3.Storage, and things like the CARv2 indexes are optimizations I haven't done yet. When I go look at my list of ways to make Fireproof fast, a lot of it is stuff that Web3.Storage has already done, you know? Okay, so it wanted me to put in my sudo password and restart my computer before you get the audio — I guess I didn't run that part of the thing. So how about we just do it the fun way: I'm going to narrate while you watch the video. So there you go. This is the new release I've been working on, and I think it's ready for y'all to write apps. So if you're building apps on something like Filecoin or IPFS, definitely check it out. It makes it so you think you're just writing a React app, but everything's IPLD as you go. And this is going to be a demo of one demo app I wrote to show off the experience: a public media gallery. You could do private media in Fireproof, because everything's encrypted by default, but I wanted public so I could show off gateway URLs and all that, and also the login experience. So here you just log in with an email address and then validate using the Web3.Storage user experience. I think this is good enough for your end users if you're not trying to brand it, right? It's not going to confuse them: you tell them what to expect, they click one link, and now they're in the app. What happens next in the demo is that I show you an already-logged-in version, just so you don't have to wait for all the sync, but in the background it's logged in and bringing the data down to the database. Oh, now we're going to take a little break — that's not the demo anymore; this is how it works. We're looking at the architecture.
Inside of Fireproof, there's the CRDT, which is the Pail clock that Alan created and that I've helped a little bit with, and the updates come into that. The cool thing about a CRDT is that updates are idempotent. It doesn't matter what order they show up in, because they carry the event information for the parent that they overwrite, which allows your stuff to get merged. It allows you to do concurrent edits with multiple users, and it kind of makes it so I don't have to think about a lot of database problems; I can just throw data into my data structure. So data comes into the CRDT and gets merged. Then it gets run through this indexer function that you define, where you can say, hey, index all my documents by ID, or by user ID, or by title, or by date created, or whatever. And what's special about the way we store them, and this is Mikeal Rogers' prolly trees, which a bunch of folks helped with, is that no matter what order you do this indexing work, you get the same CID for your index at the end. So it makes replication super efficient, and you can do Merkle diffs on your data and all that. All right, so that's how the logical storage works. And then those green blocks coming off the bottom: every time you do an operation on one of these, you're going to create blocks. It's just a stream of blocks coming out for each operation. So we were talking about car files. What I do is wrap a transaction's worth of blocks up as a car file and put a custom car header on it, which we're going to zoom in on in a second. So as a database, each transaction is a car file. And then you can take a bunch of transactions, like the whole history of the database, and wrap them up into one car file via compaction for fast web loads, so you can load your whole client experience from a single URL. And all my stuff is encrypted by default, using a symmetric key; that's what that blue wrapper indicates.
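The order-independence property described above, where the same documents produce the same index CID no matter what order they were indexed in, can be illustrated with a toy model. This is not the real prolly-tree code; it just hashes a canonically sorted entry list with Node's crypto module, and the `byTitle` indexer is a made-up example of the user-defined map function.

```typescript
import { createHash } from "node:crypto";

// Toy model of a deterministic index: sort entries before hashing so any
// insertion order yields the same digest (standing in for "same CID").
// Real Fireproof gets this property from prolly trees, not a flat sort.

type Doc = { _id: string; title: string };

// The user-defined indexer function: pick the key each doc is indexed by.
const byTitle = (doc: Doc): [string, string] => [doc.title, doc._id];

function indexDigest(docs: Doc[]): string {
  const entries = docs.map(byTitle).sort(); // canonical order, not arrival order
  const h = createHash("sha256");
  for (const [key, id] of entries) h.update(`${key}\u0000${id}\u0000`);
  return h.digest("hex");
}
```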
So it just means that if you wanted IPFS infrastructure to serve it out, you would need to share the key with the backend, but as it is, it's end-to-end encrypted, so only the clients can read it. Inside that IPLD header on my car files is a list of the rest of the car files you need to get the full database, with the oldest one being either the first car file or the result of a compaction. We're zooming in on that list here. Essentially, the way we maintain consistency is that there's also an in-browser write-ahead log that tracks which car files are committed to the cloud and which aren't yet, and it writes them through Web3.Storage to IPFS. As soon as a batch of those car files is available and written, it updates this metadata header in W3 Clock. So W3 Clock is a wrapper around the whole thing: it gets unencrypted blocks that point to encrypted car files, and it really just cares about the order of operations on those encrypted car files. It's up to the client databases to decrypt and merge once they get them. W3 Clock kind of makes it so you only have to download the relevant car files, and so you don't have a last-write-wins problem. And then on a read, you just pull maybe two or three car files that are parallel heads and merge them in the CRDT, and you get that database experience. So it's super robust to bringing in data from multiple locations, the browser or the cloud, and merging it. And the last thing you get to see, back in the demo here, is how the demo actually works. This is at publicmedia.fireproof.storage if you want to play with it. What I'm showing off here is that since it's already logged in via Web3.Storage, you can do IPFS stuff real easy. Everything happening on the screen right now is encrypted database stuff, where I set my user preferences.
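The write-ahead-log flow described above might look something like this toy model. All the names here are made up for illustration; the real log also carries encryption details and updates the W3 Clock metadata header rather than a simple `head` field.

```typescript
// Toy model of the in-browser write-ahead log: car files start out
// pending, and only after a whole batch has been uploaded do they move
// to committed and the head advance to the newest car in the batch.

class WriteAheadLog {
  pending: string[] = [];   // car CIDs not yet committed to the cloud
  committed: string[] = []; // car CIDs the clock already points at
  head: string | null = null;

  add(carCid: string): void {
    this.pending.push(carCid);
  }

  // `upload` is a stand-in for "write through Web3.Storage to IPFS".
  commit(upload: (cid: string) => void): void {
    const batch = this.pending.splice(0, this.pending.length);
    for (const cid of batch) upload(cid);
    this.committed.push(...batch);
    // Advance the head only after the whole batch is durable.
    if (batch.length > 0) this.head = batch[batch.length - 1];
  }
}
```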
This is all JSON data that's behind the encryption key, but when I do publish, it just does a regular upload to IPFS. And now this is an archive URL that you can share with your friends to show off your customized photo gallery. In the demo here, I'm switching over to the other browser to show that it's also logged in and those same galleries are available there. The last little bit is some interaction stuff to show how building in Fireproof lets you do React-style user interactions. So I just dropped some files on; the uploads are almost instantaneous. And then I can create a new album for crowds and take my AI-generated... that's the 90 snails crowd. It's like a Beyoncé crowd. So I just made crowds for different artists. I think that one's Talking Heads. I forget who everybody is, but right. And yeah, I just think it's fun to be able to have that mix of encrypted and public data and kind of use your browser as a CMS that can publish back to the web. But that's just one use case, right? Think of the other use cases that people in this crowd are going to build this for. Say you were putting a bunch of data into Filecoin and it had some structure, like you're uploading an archive, I don't know, a bunch of media files or something; you could easily build a browsable interface for it, where the diffs then go into IPFS and eventually Filecoin via Web3.Storage. What we're seeing here is that the library itself has an open-dashboard function that pops open this URL with an import, so you can inspect your application's data and see, oh, here was an upload of 41 files or whatever. This lets you get inside your data and play with it with all your database conveniences. So I'm hoping that people here want to write apps that live on the IPFS stack; there's no other cloud in here besides Web3.Storage stuff. And here's just another app that uses LLMs to build simulated interviews with potential customers.
And the last thing I'm going to show is a starter kit that's really meant to be nothing but the CRUD wrapper. So if you wanted to build an app with an item list, a detail view, and that drill-down kind of experience, that's all done, and you just get to decide what your app is about. There's a starter kit for you to try if you want. So yeah, thanks a ton for watching the video. Hopefully that made some sense and got y'all excited about writing apps.

Thank you so much, J Chris, that was great. Well, that concludes our September Mother of All Demo Days. A special thank you goes out to all our presenters from Bedrock, Nummix, Zama, Fireproof, and DAG House who presented today. Our next demo day will be October 19th, so watch out for that invite. If anybody's interested in presenting, we'll have more information in the coming week. And I'll have the recording up for those who may have missed today's meeting by the end of the day. So thank you so much, everybody. If you have any questions or links to share, definitely let me know, and thank you again. Have a great day, guys.