My name is Octina Lasoro, and for the last two years I've been contributing to Decentraland. Today I'm going to talk about using IPFS to build the metaverse.

Let's first talk about what the metaverse is. It's a social network where you interact with friends, and with people you meet there, in a three-dimensional way. You also have a digital identity, custom to you; it's the way people recognize you. What distinguishes Decentraland from other metaverses is that the users are the ones who own the platform.

So how do we do that? The world is divided into parcels, and each parcel has an owner. It's the owner who decides what to show on that part of the world: they could choose to set up a scene for a casino, a bar, or even a music festival. To make that work, we have deployed contracts on Ethereum that track the ownership of the land and the worlds, and we check that ownership with The Graph in the backend services. We have the LAND contract, through which you can own a parcel and let others have permission to deploy a scene on it. We also have NFT collections for wearables: wearable creators can mint a collection and sell it in the marketplace, and you, as a user, can buy those wearables and use them to set up your avatar; we store that in the user's profile. You can also own a name in Decentraland, and that's the identity that belongs to you.

What's the problem with all of that? There are a lot of files, too many assets to store: scenes, 3D models, wearables, pictures. We need to store them somewhere, and we have to do it in a decentralized way. So what we have are the Catalyst servers, which store all the data the client needs to run. The community owns the servers, meaning the DAO is responsible for approving the list of servers in the network. The servers have to synchronize with each other, because all the content is replicated on each of them, so the client can connect to any of them and get the same information. The way we do that is that every server, which we call a Catalyst, runs a polling mechanism against all the others to retrieve the files and the entities.

This works okay, but we wanted to go a little bit further and test two things: the historical data, because as you may assume we have lots of data, including every change that has ever happened to a scene, and the way the files are replicated between servers.

Let's first talk about the historical data. For example, this is the Genesis Plaza in Decentraland, how it looked in 2020, and here is how it changed in 2021. The servers only need to serve the latest content; they don't need to serve how Decentraland looked two years ago. But we want to keep that data as a backup, and, for example, so you can run the world as it looked a year before. If you run a full node, which means you keep all the historical data, all you need to do is enlarge your disk and everything will work fine; the only thing is that you will need about two terabytes of disk. If you want to run a light node instead, you can enable garbage collection, a mechanism that deletes the files of entities that were overwritten by newer ones. Okay, but what happens if all servers enable garbage collection? Then we may lose that data, and we don't want that.
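To make the light-node idea more concrete, here is a minimal sketch of what such a garbage-collection pass could look like. The `Entity` shape, the `FileStore` interface, and the helper names are assumptions for illustration only; the real Catalyst implementation may differ.

```typescript
// Hypothetical types: an entity points at one or more parcels ("pointers")
// and references a set of content files by hash.
interface Entity {
  id: string
  pointers: string[]      // e.g. parcel coordinates like "10,-3"
  timestamp: number
  contentHashes: string[] // files this entity needs
}

interface FileStore {
  delete(hash: string): Promise<void>
}

// Garbage collection: keep only the newest entity per pointer and delete
// files that are no longer referenced by any active entity.
async function garbageCollect(allEntities: Entity[], store: FileStore): Promise<void> {
  // Find the newest entity deployed on each pointer.
  const newestByPointer = new Map<string, Entity>()
  for (const entity of allEntities) {
    for (const pointer of entity.pointers) {
      const current = newestByPointer.get(pointer)
      if (!current || entity.timestamp > current.timestamp) {
        newestByPointer.set(pointer, entity)
      }
    }
  }

  // Hashes still referenced by the active (newest) entities.
  const activeHashes = new Set<string>()
  for (const entity of newestByPointer.values()) {
    entity.contentHashes.forEach((h) => activeHashes.add(h))
  }

  // Delete every file that only overwritten entities reference.
  const deletable = new Set<string>()
  for (const entity of allEntities) {
    for (const hash of entity.contentHashes) {
      if (!activeHashes.has(hash)) deletable.add(hash)
    }
  }
  for (const hash of deletable) {
    await store.delete(hash)
  }
}
```

A full node would simply skip this pass and keep every file, which is where the roughly two-terabyte disk requirement comes from.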
So our idea was to set up an IPFS node connected to a server that listens to the whole network, listens for all the changes, and pins to IPFS all the files that are synchronized. First we uploaded all the existing files, and then we set up the server so that it listens for changes and automatically pins the new files (a minimal sketch of such a pin-on-change listener is at the end of this transcript).

The other thing is file replication. What is that? It's the way we share those files between the servers. Today we do it with HTTP requests in a full mesh topology, because every node talks to every other node. Our idea was: what happens if we use IPFS and leverage that instead? Then we don't have to care about synchronizing the files ourselves; we only have to care about the validations we need to do against the blockchain and the entities. We would only need to know which hashes to pin, and IPFS would handle that part of the replication itself. So that's our idea, a trial to do something different and test whether we can leverage IPFS for that. That's all. Thank you.

I was just wondering, could you speak a little bit about your decision to use IPFS instead of Arweave for storing historical data?

Instead of... Arweave, for permanent data, to store historical data? We currently have the data stored locally in the file system. There is a way to configure that, so if you run a node and want to store the data on S3, you can. We have been thinking a lot about the way the Catalysts sync with each other, and we've been looking a lot at how IPFS does that, so that's why we chose IPFS to test this. But what we have done so far is only the historical data; we haven't used IPFS to run the sync yet.

How do you handle content moderation, or cases where certain content or files might be illegal in some countries but valid in others?

The way we do that now is that each of the servers has an owner, and it's their responsibility to deny-list the entities or files they need to moderate. So we are not doing the moderation ourselves; we let the owners of the Catalysts do it themselves. We only provide the mechanism for them to remove those files and stop serving them. Thank you.
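As a rough illustration of the pin-on-change listener mentioned above, the sketch below polls a content server for recent deployments and pins the referenced hashes to a local IPFS daemon via the `ipfs-http-client` package. The `/deployments?from=` endpoint, the response shape, the server URL, and the polling interval are assumptions for illustration, not the actual Decentraland API.

```typescript
import { create } from 'ipfs-http-client'
import { CID } from 'multiformats/cid'

// Hypothetical shape of a "recent deployments" response from a content server.
interface Deployment {
  entityId: string
  localTimestamp: number
  contentHashes: string[] // IPFS CIDs of the files the entity references
}

const ipfs = create({ url: 'http://127.0.0.1:5001/api/v0' }) // local IPFS daemon
const CONTENT_SERVER = 'https://peer.example.com/content'    // placeholder URL
const POLL_INTERVAL_MS = 30_000

let lastSeen = 0

// Poll the content server for deployments newer than the last one we saw,
// and pin every file they reference so the data survives garbage collection.
async function pinNewDeployments(): Promise<void> {
  const res = await fetch(`${CONTENT_SERVER}/deployments?from=${lastSeen}`)
  const deployments: Deployment[] = await res.json()

  for (const deployment of deployments) {
    for (const hash of deployment.contentHashes) {
      await ipfs.pin.add(CID.parse(hash)) // re-pinning an already-pinned CID is harmless
    }
    lastSeen = Math.max(lastSeen, deployment.localTimestamp)
  }
}

// Check for new deployments every 30 seconds.
setInterval(() => {
  pinNewDeployments().catch((err) => console.error('pinning pass failed', err))
}, POLL_INTERVAL_MS)
```

The same set of pinned hashes is all another node would need in order to fetch the files over IPFS instead of over the current HTTP full mesh.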