Unfortunately, I'm in a bit of a brittle state here at this conference, so I didn't really have a chance to prepare slides. The context of this session is that we, the Swarm team, as the project's scope grew, have undertaken the task of providing base-layer services for Web3. What does this mean? We want to provide services that help web developers migrate their projects from the web stack to Web3. This includes node-to-node messaging, as you might have seen in Louis Holbrook's talk on PSS; it includes payment solutions, that is, the infrastructure to run incentivized service networks; and thirdly, very importantly, some sort of database support. The Swarm team has launched working groups around these different topics, and today's session maps nicely onto the database working group, which builds on the Swap, Swear and Swindle line of work. We are very privileged to have the people driving it in the audience right now. They dedicate a lot of manpower to this project, and it is just launching. So I want to briefly touch upon some aspects that we're trying to target with this project. As you might know, Swarm's underlying layer is just a chunk store. The underlying storage network takes care of distributing fixed-size chunks to particular nodes in the network and retrieving them. All the higher-level API functionality is built on that: for example, when you retrieve an asset or a file in your browser, the file is reassembled from chunks fetched from the network by their content addresses.
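To make the chunk-store idea concrete, here is a minimal Python sketch; this is illustrative only, not Swarm's actual implementation, and the class and method names are made up. The point it shows: a chunk's address is simply the hash of its content, so anything retrieved by address can be verified against that address.

```python
import hashlib

CHUNK_SIZE = 4096  # Swarm chunks are 4K

class ChunkStore:
    """Toy content-addressed chunk store: address = hash(content)."""
    def __init__(self):
        self._chunks = {}

    def put(self, data: bytes) -> bytes:
        assert len(data) <= CHUNK_SIZE
        address = hashlib.sha3_256(data).digest()
        self._chunks[address] = data
        return address

    def get(self, address: bytes) -> bytes:
        data = self._chunks[address]
        # Integrity comes for free: the address is the content hash.
        assert hashlib.sha3_256(data).digest() == address
        return data

store = ChunkStore()
addr = store.put(b"hello swarm")
assert store.get(addr) == b"hello swarm"
```

In the real network the dictionary is replaced by the nodes closest to the chunk's address, but the store/retrieve-by-hash contract is the same.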
On top of the chunk layer there is the manifest: a manifest is a data structure that maps paths, the URLs you request, to the content addresses of the assets they refer to, so resolving a URL is itself just a traversal of a content-addressed structure. One aspect that we would like to work on is to have provable databases. A reputation system gives you some assurance, but we want a mechanism by which the responses of a database service can actually be verified. Here the chunk format helps us: the chunk size is configurable, but the canonical chunk is 4K, and its hash is a binary Merkle tree computed over 32-byte segments. That means there is a compact inclusion proof for any segment of a chunk, and there is an example implementation of verifying such proofs in Solidity.
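As a rough illustration of these chunk-level inclusion proofs, here is a toy binary Merkle tree over the 32-byte segments of a 4K chunk. This is a sketch of the general construction only, not Swarm's exact BMT hash (which also covers the data span and differs in details):

```python
import hashlib

SEGMENT = 32

def H(b: bytes) -> bytes:
    return hashlib.sha3_256(b).digest()

def bmt_root(chunk: bytes) -> bytes:
    """Binary Merkle tree root over the 32-byte segments of a chunk."""
    level = [chunk[i:i + SEGMENT] for i in range(0, len(chunk), SEGMENT)]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(chunk: bytes, index: int):
    """Sibling hashes proving that segment `index` is part of the chunk."""
    level = [chunk[i:i + SEGMENT] for i in range(0, len(chunk), SEGMENT)]
    proof = []
    while len(level) > 1:
        proof.append(level[index ^ 1])  # sibling at this level
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root: bytes, segment: bytes, index: int, proof) -> bool:
    """Recompute the root from one segment plus its sibling path."""
    node = segment
    for sibling in proof:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == root

chunk = bytes(range(256)) * 16          # a full 4096-byte chunk
proof = inclusion_proof(chunk, 5)       # 7 siblings for 128 segments
assert verify(bmt_root(chunk), chunk[5*32:6*32], 5, proof)
```

The verifier only hashes log2(128) = 7 pairs, which is what makes an on-chain (e.g. Solidity) check of such a proof cheap.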
We try to come up with a clever serialization scheme; together with canonical serialization and inclusion proofs, you can directly map statements about the data, for example "this item is included in that index", to proofs over the stored chunks. The typical use case here is an indexing service that aggregates data sources and produces a common aggregate index; we presented this idea earlier, in Cancún and in Mexico City. If we find a clever canonical serialization scheme for the database layout and use inclusion proofs, then basically such a statement can be translated into an inclusion proof that you can submit to the blockchain, and a smart contract can evaluate that the particular statement is true. And since that is a mechanistic process that simply answers yes or no, it can be used in a witness contract in a Swap, Swear and Swindle game. So when a database indexer registers their service to index a particular primary data source, they commit to a service contract, and you can then challenge that service contract if you find that a data item was not included in the index. I don't know if this is more or less clear, but that's one aspect that we're trying to integrate. And this is very important, because the other solutions that we see in the space, all these data gateways, including BigchainDB and also the IPFS data solutions, are very nice, and there are features in those paradigms that are very important, but they seem to have this missing piece where you can actually have provability of the database operations and provability of query responses.
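To show how a statement like "entry e is in the index with root r" becomes a mechanistic yes/no check, here is a toy Python sketch. The canonical serialization used here (sorted-key JSON, hashed down to one 32-byte leaf) and all the function names are my own assumptions for illustration, not the working group's actual scheme:

```python
import hashlib
import json

def H(b: bytes) -> bytes:
    return hashlib.sha3_256(b).digest()

def leaf(entry: dict) -> bytes:
    """Canonical serialization: sorted keys, no whitespace, hashed so
    every index entry occupies exactly one 32-byte Merkle leaf."""
    raw = json.dumps(entry, sort_keys=True, separators=(",", ":")).encode()
    return H(raw)

def build(entries):
    """Merkle tree over canonical leaves; returns (root, all levels)."""
    level = sorted(leaf(e) for e in entries)
    while len(level) & (len(level) - 1):    # pad to a power of two
        level.append(b"\x00" * 32)
    levels = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return level[0], levels

def prove(levels, index):
    proof = []
    for level in levels[:-1]:
        proof.append(level[index ^ 1])
        index //= 2
    return proof

def witness(root, leaf_hash, index, proof) -> bool:
    """The mechanistic yes/no check a witness contract would perform."""
    node = leaf_hash
    for sibling in proof:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == root

entries = [{"doc": "a", "seq": 1}, {"doc": "b", "seq": 2}, {"doc": "c", "seq": 3}]
root, levels = build(entries)
item = leaf(entries[1])
index = levels[0].index(item)
assert witness(root, item, index, prove(levels, index))
```

The indexer commits `root` on-chain; a challenger submits `(leaf_hash, index, proof)` and the contract answers yes or no, which is exactly the shape of evidence the Swindle game needs.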
So IPLD, for example, is a very evolved database structure, which we definitely rely on; ours is basically an extension of it. What does it consist of? It's basically a JSON-like attribute-value scheme in which embedded structures can be referred to by the content address of the embedded object, which means that it genuinely provides integrity protection over your data. And the important thing about these content-address links is that if you map such a data structure into an in-memory data structure, then the links are typically pointers, and using content addresses, these pointers can be directly mapped to references in distributed storage. So the database layout immediately defines how to store such huge graph databases in a content-addressed storage system. And this leads me to the next important point, which is also going to be a relatively novel feature of our framework: since the pointers are references to content addresses in the distributed storage, all the database operations, like merge or delete or add, which are normally defined on pointers as recursive functions across these embedded structures, simply map to network protocols. When you define an add or merge function over an in-memory database and use the pointers to children to define the next level of the recursion, in the same way you can define protocols over the Swarm network that carry out those operations. And this is a very interesting approach, because it allows you to basically outsource an update to the database to the network. You don't have to pre-compute an addition to a huge data graph yourself; you simply send a message to the address that corresponds to the root hash of the database with the request: please add this extra item to the database.
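A minimal sketch of the IPLD-style linking, assuming a plain in-process dict as the "distributed storage"; the `{"/": address}` link shape follows IPLD's JSON convention, but everything else here (function names, hashing details, dict-only nesting) is illustrative:

```python
import hashlib
import json

store = {}  # toy distributed storage: content address -> serialized node

def put(obj) -> str:
    """Store a JSON-like object, replacing embedded dicts by content-address
    links {"/": address}; returns the object's own content address.
    (Sketch: only dict nesting is handled, not lists.)"""
    if isinstance(obj, dict):
        obj = {k: ({"/": put(v)} if isinstance(v, dict) else v)
               for k, v in obj.items()}
    raw = json.dumps(obj, sort_keys=True).encode()
    addr = hashlib.sha3_256(raw).hexdigest()
    store[addr] = raw
    return addr

def get(addr: str):
    """Resolve an address back into a fully expanded object,
    following links recursively."""
    obj = json.loads(store[addr])
    if isinstance(obj, dict):
        return {k: (get(v["/"]) if isinstance(v, dict) and "/" in v else v)
                for k, v in obj.items()}
    return obj

doc = {"a": 1, "b": {"c": 2}}
root_addr = put(doc)
assert get(root_addr) == doc
```

Note that `put` is exactly the pointer-to-reference mapping from the talk: the in-memory pointer to the embedded object becomes a content address in storage, and because serialization is canonical, identical subtrees get identical addresses.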
And what the receiving node does is simply look up among its children where the data has to be added, look up the content address that points to that intermediate node, and forward the message to it. So instead of recursing on an in-memory structure, you are recursing as a sequence of relayed messages. As you can see, this topic links together all the components we've talked about today, because these database operations can basically be described as PSS protocols. And in the same way that PSS messaging is incentivized, these database services are too: when I send an update message, it is basically a service request of the kind we talked about in the Swap, Swear and Swindle context. These are promissory notes, basically, that mature and become rewardable when the update happens. So you can formulate proper verification criteria with the witness contract to verify that a particular update happened. And in the same way as with other services, if you fail to comply with the service promise that you will index a particular data source, then you can be challenged on the blockchain and you stand to lose your deposit as a result. So these are the things I wanted to mention; sorry, I'm running out of steam now, so I hope you don't mind that I cut it short.
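The recursion-as-relayed-messages idea can be sketched like this, with a plain dict standing in for the network and an ordinary Python call standing in for each relayed message; the node layout (a toy two-way search tree with a `split` key) is entirely made up for illustration:

```python
import hashlib
import json

network = {}  # content address -> node payload (stand-in for the DHT)

def store_node(node: dict) -> str:
    raw = json.dumps(node, sort_keys=True).encode()
    addr = hashlib.sha3_256(raw).hexdigest()
    network[addr] = node
    return addr

def add(addr: str, key: str, value) -> str:
    """Handle an 'add' request addressed to the node at `addr`.
    Instead of recursing in memory, each step is a message relayed to
    the address of the child subtree; returns the new root address."""
    node = network[addr]
    if node["kind"] == "leaf":
        entries = dict(node["entries"])
        entries[key] = value
        return store_node({"kind": "leaf", "entries": entries})
    # Internal node: forward the message to the child owning the key range.
    side = "left" if key < node["split"] else "right"
    new_child = add(node[side], key, value)   # the "relayed message"
    updated = dict(node)
    updated[side] = new_child
    return store_node(updated)

left = store_node({"kind": "leaf", "entries": {"apple": 1}})
right = store_node({"kind": "leaf", "entries": {"mango": 2}})
root = store_node({"kind": "internal", "split": "m", "left": left, "right": right})
new_root = add(root, "banana", 3)
```

In the real protocol each recursive call would be a PSS message to whoever is closest to the child's address, and, as in any persistent structure, only the addresses along the updated path change, so the new root hash is itself the proof handle for the witness contract.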