So, the concept behind Oraclize was already explained yesterday, but, briefly, the idea is that we enable the communication of existing web APIs, or external contexts like IPFS and Swarm, with any blockchain. Most users of Oraclize at the moment are in the Ethereum space. Along with the data that we send to the blockchain, we also attach an authenticity proof, which is basically a collection of signatures and data that anybody can use to verify that we are behaving honestly, so that while intermediating here, we are not operating in a malicious way and we are not tampering with the data. The way Oraclize works is straightforward. You have a contract like this, which imports the Oraclize Solidity API; this is the demo contract, a very simple one. You can call out to the Oraclize engine with an Oraclize query, and you specify a data source, which in this case is "URL", meaning that you want to do an HTTP GET request. Then you specify the actual query, so the endpoint you want to call, in this case cryptocompare.com. This is basically the reference to the data you want Oraclize to fetch for you. Oraclize does it and sends back a new transaction. So there are two transactions: the first one is the transaction the user sends, which maybe calls an update method and starts the call-out to the Oraclize engine; the second one is when we send the result and the authenticity proof back to the calling contract. That second transaction typically calls the callback method. There you have a query ID, you have the result and the proof, and then you do whatever your contract needs to do with the result that Oraclize has sent back.
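A minimal sketch of the two-transaction flow just described, assuming the standard Oraclize Solidity API for Solidity 0.4 (the import path and the exact cryptocompare query string are illustrative):

```solidity
pragma solidity ^0.4.18;
import "github.com/oraclize/ethereum-api/oraclizeAPI.sol";

contract PriceFeed is usingOraclize {
    string public price;

    // Transaction 1: the user calls update(), which starts the call-out
    // to the Oraclize engine using the "URL" data source (HTTP GET).
    function update() public payable {
        oraclize_query("URL",
            "json(https://min-api.cryptocompare.com/data/price?fsym=ETH&tsyms=USD).USD");
    }

    // Transaction 2: Oraclize calls back with the query ID and the result
    // (and the authenticity proof, if one was requested).
    function __callback(bytes32 myid, string result) public {
        require(msg.sender == oraclize_cbAddress());
        price = result; // do whatever the contract needs with the result
    }
}
```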
So basically, what we want to talk about in this presentation is how to solve this problem. We know that we have different limitations on the blockchain due to constraints of the EVM, like missing floating-point support, for example. There is the gas limit, which in some cases, for complex computations, restricts the applicability of some algorithms. There are high costs: for example, we might have something which is doable on-chain but not really convenient, because it might end up costing 3 million gas or something like that, which is very expensive. And of course, due to Solidity and the EVM, we cannot use traditional libraries; think about financial libraries, or all the libraries that have been designed in the last decades. Also, we know we have no confidentiality or privacy. There are new techniques to get some, but they are still quite new. And there are scalability problems with doing everything on the blockchain, which typically is not a good idea. So we want to solve this, and there are different scalability solutions that we have seen in previous presentations. For example, TrueBit is one way to solve this via an iterative process that has been presented already. The approach I'm going to present now is a bit different. It's based on the same security guarantees that Oraclize provides. It's all about proving that a given piece of code has been executed as intended in an off-chain context, without needing to execute the code again. If you think about TrueBit, the way it works is that there is a verification that some operator, some agent, needs to perform so that you can find out where the disagreement on the result is. So you need to execute the code again and again, which might be impossible if the process, for example, is not deterministic. In the design that I will explain in a bit, the execution is done just once, so it's not necessary for the code that we write to have any special property, or even to be deterministic.
We will just try to prove that the computation was done in a provably, fully locked-down environment that nobody could control. So basically what we want is to overcome the limitations that we have just seen, possibly without bringing too much complexity and without filling up the block, so without using too much gas, because we want to keep it cost-effective; we do not want to break the viability of most dapps with transactions that cost $60 or something like that. And we want to keep strong security guarantees, or we wouldn't use the blockchain in the first place. So yes, we basically want verification without execution. I will show you first how it's possible to do it, so how it works in practice, how a developer can write a computation archive for the computation data source, and then we will see some properties. The way it works starts like this: you need to write a Dockerfile. Pay attention, because the Dockerfile is not used here to provide isolation. Historically we have seen that Docker does this quite badly; there have been different vulnerabilities in Docker itself and in the isolation techniques being used, and Docker is quite a mess in its design. The reason why we use it here is that it's pretty much the de facto standard for describing an execution context. So if we want to write a recipe that describes how we want to execute a given piece of code, Docker is widely adopted and widely used. This one is very simple, about five lines of code. Here we are just saying that we would like the code to be executed in that reference image, so Ubuntu and so on. We say how we want to initialize the context, so we need Python, and then we describe with a one-liner what we want to execute. Here we are saying: I want to call the randint function in Python and print to standard output a random number between 0 and 100.
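The five-line Dockerfile described above might look roughly like this (a sketch; the exact base image tag and setup commands are assumptions, since the slide isn't reproduced here):

```dockerfile
FROM ubuntu:16.04

# Initialize the execution context: all we need is Python.
RUN apt-get update && apt-get install -y python

# The one-liner to execute: print a random integer in [0, 100]
# to standard output, which is what gets returned as the result.
CMD ["python", "-c", "import random; print(random.randint(0, 100))"]
```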
So this is very simple, but it could be as complicated as you would like it to be. It could depend on external files, on external libraries; it could even just be a binary. It doesn't really need any special configuration due to the architecture; this is a very traditional way to execute a piece of code. What we do next is just put the Dockerfile and any other file dependency that we need into a very simple archive, a zip. Then we distribute it somewhere. This somewhere could be basically anything, but for now what we support is just IPFS or Swarm. What you see here is the archive being uploaded to IPFS. As you know, this doesn't guarantee the persistence of the content, so you need to make sure the file stays available, by keeping a node up and pinning it there, or by pinning it on another running instance. What happens next is that anybody who knows the IPFS multihash can download the content of the zip file, check the Dockerfile, and understand what we want to execute. This is basically what we need to tell Oraclize. We just say: here the data source is "computation", so we don't want to call out to an HTTP API, we want to execute a piece of code. Then you pass as one argument the multihash, or some reference to the zip file. It doesn't really need to be on IPFS; it can even just be the SHA-256 of the archive, and if for some reason Oraclize already knows what the zip file contains, because you sent it off-chain for example, then it will be able to resolve it. So basically here we just need to reference the code. You see that this enables anybody who has access to the correct zip file to verify that we are referencing exactly that content. Potentially you could specify more arguments here if you need them, but this is not mandatory; it depends on the code you are executing.
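The corresponding query, sketched against the same Oraclize Solidity API (the import path is the standard one; the IPFS multihash below is a placeholder, not a real archive):

```solidity
pragma solidity ^0.4.18;
import "github.com/oraclize/ethereum-api/oraclizeAPI.sol";

contract RandomNumber is usingOraclize {
    uint public randomNumber;

    // Ask Oraclize to execute the archived Dockerfile: the data source is
    // "computation" and the first argument references the zip archive
    // (here an illustrative IPFS multihash) that anybody can fetch and inspect.
    function update() public payable {
        oraclize_query("computation",
            ["QmExampleMultihashOfTheZipArchive"]);
    }

    // Whatever the code printed to standard output comes back here.
    function __callback(bytes32 myid, string result) public {
        require(msg.sender == oraclize_cbAddress());
        randomNumber = parseInt(result); // e.g. store the number on-chain
    }
}
```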
If we look again here, we see that there is basically no argument; we are just saying we want to execute code that returns a random number between 0 and 100. If, for example, 100 were to be an argument, then we could refer to it and specify the input argument here. So this is just an execution with Browser Solidity. You see that the code was executed and the result was 46, and since in Solidity we didn't really do anything else, we just saved the random number in storage. So here you have the random number. This typically takes a couple of minutes to execute, and the overhead comes from the way it works. At the moment the approach is quite simple. It's based on a concept explained on Bitcointalk, I think in 2012 or 2013 for the first time, which was called the Amazon Web Services sandbox. The idea is that by using some APIs provided by Amazon Web Services, we can prove to a third party that a given machine, a given instance we spin up, is really based on a publicly verifiable snapshot. And we can prove that, according to Amazon, the machine that executed that piece of code is really fully locked down, and there is no way for the operator, for whoever created the machine, to get access to it and tamper with the result. For example, when the machine is started, SSH is disabled, the password of the root user is randomized, and so on. And the Dockerfile is executed in such a way that the input hash that we have specified is verified. These are all pieces that anybody can verify, and if you trust, not Oraclize in this case, but Amazon, then you are 100% sure, unless there are some bugs in the code, that the result is really coming from there. This is based on the assumption that we trust Amazon, and this is again, as I explained yesterday, the same trust assumption that we have with the authenticity proofs.
What we can do to lower the trust involved is use different techniques. For example, we could use Azure or other cloud platforms that provide similar guarantees, and the user can choose to execute that piece of code in different contexts. We can get the authenticity proofs from each of those, and if the results match, then we know that according to, say, three different attestors, the execution was really that one. So this is a compromise that today provides a simple approach to delegated off-chain computation. Of course, this is not as trustless as it could be, but it has some nice properties. It being so simple and generic enables any piece of code to be executed, with no constraint of using a custom virtual machine or custom architecture; Docker, which is pretty standard, can be used today. This is already being used by some projects, and it has very broad applicability. For example, before the last hard fork we were using it to enable some contracts to use RSA signature verification. Now it is possible to do this natively, but it did enable some projects to use crypto that was not natively available on Ethereum. It can be used, and is being used, by some banks to integrate into their proofs of concept the financial libraries that they are really used to. And of course, the alternative would just be to execute the same piece of code on a centralized server that they manage, without giving any guarantee. At least in this way we prove that, according to a different party, the execution was correct. So we can move the trust away and spread it among different parties. The future work on the computation data source is all based on the feedback that we will get. We already got some very good feedback, so please come to me if you tested it or if you plan to test it. Something that we already have in the works is, as I said, execution in different contexts, so that we can provide guarantees from different attestors.
The on-chain verifiability of proofs, as we announced yesterday, will be available on test nets very soon, before the end of the year. This already pretty much works, and it enables the receiving smart contract to verify the proof before using the result. Long-running instances are something else that is quite interesting, and we already did some tests. For example, there was a project where we tested a PayPal payment being done via this system, so that a smart contract could prove that a fiat payment with some parameters was received from a user, and it could then release a token or do something like that accordingly. But there are many other possibilities. For example, the A2 project is using this in the decentralized exchange that they are building, to prove that the order matching of the decentralized exchange, which is typically done off-chain, is fair. For example, you want to prove that you are not doing any front-running, that you are not censoring orders, and so on. This is something that not even centralized exchanges are doing. And then there is support for different architectures: if you don't want a Dockerfile to describe the execution context, maybe you want to execute something in Solidity on the EVM, or you want to use the Moxie virtual CPU or other architectures. Thank you for your attention.