Hello everyone, and thank you for having me here today. My name is Thomas Bertani, and I'm the CEO of Oraclize. In this presentation I will start with a brief recap of the oracle problem and how Oraclize is trying to solve it, and then I will do a deep dive on authenticity proofs and on how we can verify them on-chain. We have already seen in the previous presentations what the oracle problem really is. We know that in decentralized networks like Ethereum it is really complicated to reach out to external data, data which lives in different contexts, for example on the web. But in general this is something we want, because for decentralized applications to have value, those applications may need to depend on external, real-world data. So it is important to define three different entities, even though in some oracle models some of them are merged. The dapp is any piece of code that requires the external data. Then we have the oracle and the data source. The data source is not necessarily the same party as the oracle itself: it is easy to define the oracle as the party that sends the data to the blockchain, but the data source might well be a different party than the oracle. We saw in the Thomson Reuters presentation a model where the data source and the oracle are the same entity. In general, though, what we want is to be able to reach out to any external data source without requiring the data sources to adapt to the blockchain. There are so many external data sources available on the internet that it is quite unlikely that, in the short term, they will all adapt to the blockchain and start signing their data and sending it wherever we need it. So the Oraclize solution is one of intermediation, and ironically we could say that Oraclize is a new type of intermediary in a world that is trying to get rid of them.
So we see on one side web APIs, so any possible existing data source, and on the other side blockchains in general. Oraclize sits in the middle and tries to make these two different worlds communicate, so that different contexts can exchange data. This is what we call a data transport layer. We have seen a lot in the previous presentations about the problem of trust. Of course, having a new intermediary implies some trust lines that we have to open. We don't want to trust a small startup of ten people, and this is why we had to provide some kind of authenticity proof: a proof of the fact that the data we are sending to the blockchain application is indeed authentic, as reported by the external data source, and not tampered with by the intermediary, which is Oraclize in this case. In order to do that, we send to the blockchain not just the result that comes from the external data source, but also the so-called authenticity proof. Authenticity proofs are a very generic concept. What we want is just some kind of evidence that the fetching of data from the external data source was indeed honest; we want to know that the operator is behaving honestly. This is trivial when the data source signs the data. Unfortunately, there are so many different standards and proposals for signing data on the internet that pretty much none of them is widely used. Hopefully this will be solved in the coming years, but in the meantime we know that some data sources have already made a choice. For example, there is an IETF proposal that has been under discussion for several years, called Cavage HTTP Signatures, which is at its seventh iteration now. This is one possible way for the data source to sign the data, and there are already several data sources signing their data in this way.
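To give a feel for the scheme: under the Cavage draft the data source covers a set of response headers with a signature, and the verifier rebuilds the same signing string and checks the signature. The sketch below, with hypothetical header values and a shared HMAC key (the draft also supports RSA and ECDSA), shows the general shape of that check; it is illustrative, not Oraclize's implementation.

```python
import base64
import hashlib
import hmac

def signing_string(headers, covered):
    # Per the draft, the signing string lists each covered header as
    # "name: value" with lowercase names, joined by newlines.
    return "\n".join(f"{name.lower()}: {headers[name]}" for name in covered)

def verify_signature(headers, covered, signature_b64, shared_key):
    expected = hmac.new(shared_key,
                        signing_string(headers, covered).encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, base64.b64decode(signature_b64))

# Hypothetical signed API response
key = b"shared-secret-with-data-source"
headers = {"date": "Wed, 01 Nov 2017 12:00:00 GMT",
           "digest": "SHA-256=47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU="}
sig = base64.b64encode(hmac.new(
    key,
    signing_string(headers, ["date", "digest"]).encode(),
    hashlib.sha256).digest()).decode()

print(verify_signature(headers, ["date", "digest"], sig, key))       # True
print(verify_signature(headers, ["date", "digest"], sig, b"wrong"))  # False
```

Any party holding the right key material can redo this check, which is why a signing data source makes the oracle's honesty trivially verifiable.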
Oraclize supports it, which means that if the data source you choose signs the data in this way, you can leverage the Oraclize data transport layer and connect this API to your smart contract application straight away. But in general, web APIs do not sign their data, and this is why we need to leverage different techniques. We have seen what Chainlink does with the Town Crier project, which is based on Intel SGX. This is one possible technology we can use, so trusted computing with Intel SGX, but there are many other techniques. Oraclize at the moment is using some sandboxing techniques backed by software guarantees, such as TLSNotary, and others backed by hardware, like Intel SGX, the Qualcomm TEE, or the Ledger Nano S attestation that we saw yesterday here with Nicolas from Ledger, and many others. So there are many different ways we can provide this kind of evidence, and what Oraclize is doing is not relying on a single technology to do it; as we will see in a minute, we believe that would be quite a weak architecture. So what we have said is that when the data source does not sign the data, we need to use a different technology. But which one do you choose? Well, this is a complicated question, because every time we use one of these technologies we are really trusting the attestator. The attestator is basically whoever designed the technology, whoever provides it. Think about the Intel SGX case: on paper Intel SGX is very safe, and for sure it is one of the most interesting and flexible trusted computing technologies available on the market. But at the end of the day we need to trust Intel, because we cannot verify at home whether the chip, the CPU package, is really doing what it is supposed to do, and we cannot really know whether the signature coming from the attestation service is produced correctly, as intended.
In general, every time we use one of those technologies we are trusting somebody. Even in the case of the Ledger proof, we are trusting Ledger; in the case of the Android proof, we are trusting Qualcomm and Google, for sure. So by using those technologies to prove the authenticity of the data, we are not really delivering a trustless solution. We are simply shifting the trust away from the central operator, which in this case is Oraclize, a small startup that we do not want to trust, to parties that have something more at stake and a strong reputation. Of course, if this were all we had, it would be quite weak and disappointing: if the solution to the oracle problem were to trust a single party, then we could simply go to that company and ask them to run the oracle service. That would be exactly the same open trust line, and it would be much more efficient. However, the blockchain was not designed to move the trust to a single central company, right? Or we would not be here today. By using different technologies we can spread the trust and get more than one proof backing the same claim. If we use all the technologies we see here, for example, we are trusting four different companies which, in order to provide valid proofs backed by tampered data, would all need to lie at the same time and in the same way, which is quite unlikely. So all we are doing here is moving the trust to different parties by using their technologies, but without asking them to run the service. Something we get asked quite often is: wait, why don't you just ask those companies to come together in a consortium and provide this as a service? They could simply sign the data in a multisig fashion. That would be trivially doable and much easier, right?
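To make the "all four must lie the same way" intuition concrete, here is a back-of-the-envelope calculation with an illustrative, made-up per-attestator compromise probability; the point is only that independent failures multiply.

```python
# If each of the four attestators can be compromised independently with
# probability p, a consistent collective lie requires all four to fail
# together in the same way.
p = 0.01                 # illustrative per-attestator compromise probability
single = p               # risk when trusting one attestator alone
combined = p ** 4        # all four lying at the same time, in the same way
print(single, combined)  # the combined risk is orders of magnitude smaller
```

The assumption of independence is exactly what a consortium of the same providers would undermine, as discussed next.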
Well, if they came together in a consortium, this would be quite weak, because by doing so they would show the world that they have an agreement, and if they have an agreement, the combination of the different proofs is much weaker, as it shows they are willing to cooperate. This is why it is much better to have an independent party, different from all these technology providers, running the service. So what Oraclize is doing is putting into the Oraclize Engine, which is one physical box that we designed, all these different trusted computing techniques, so that we can provide different layers, different levels of proofs, and offer this as a service to the blockchain. Today this is not managed by a distributed network; it is something we fully manage ourselves. As we have just seen, thanks to the authenticity proofs we do not need to trust Oraclize, but we do need to trust all these different attestators. So the only risk tied to the central operator of the Oraclize service is the service ceasing to run, so bad quality of service, or maybe censorship, or other problems of that kind. This is why we are working towards a new possibility: delegating the management of the Oraclize Engine machines to any interested parties willing to contribute to keeping the service running. This will move the governance of the maintenance of the service away from Oraclize to a different set of entities. The rest of the presentation is about authenticity proofs, so I want to dive into the verification of those proofs. What you see here is the network monitor, a web-based tool that anybody can use to verify off-chain the authenticity of the proofs.
Unfortunately, this means that once you have a smart contract like this one, which wants to reach out to a web API like CryptoCompare, you specify the proof you want, like `oraclize_setProof(proofType_TLSNotary)`. Here we are saying we want a TLSNotary proof, which is software-backed and based on trust in Amazon. And we want proof storage on IPFS, so we will not get back the raw proof, which would be quite big, but just the IPFS multihash of it. As a consequence, it is not possible for a contract like this to verify the validity of the proof on-chain when it receives the callback. This is not ideal, because Oraclize could potentially send back a wrong proof and you would only notice afterwards. The interesting part is that anybody can check all of the 300,000 proofs we have sent on-chain over the last two and a half years, and you will see that there is none which fails verification, unless one appeared while I was on stage. But in general what we want is a stronger guarantee, where the authenticity proofs can be verified by the receiving contracts. So let's see first what an authenticity proof contains. This really depends on the sandboxing technique being used, because different techniques need different verification steps. However, they are all basically a collection of signatures and data used to verify some kind of attestation, some kind of claim coming from the attestator. Still, those proofs are general-purpose, so their format has nothing to do with the blockchain; we are investigating use cases outside the blockchain space as well. And they are self-contained, which means that once you have the proof, you should be able to extract the full message from the proof itself; there should be nothing outside the proof that you need in order to verify its validity.
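For context on what the contract actually receives in IPFS storage mode: an IPFS content identifier is a multihash, a self-describing hash with a function-code and length prefix, base58-encoded. The sketch below shows that encoding for sha2-256. Note that a real IPFS hash covers the serialized DAG node rather than the raw bytes, so this only illustrates the multihash format, not how to reproduce an actual proof's address.

```python
import hashlib

BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, rem = divmod(n, 58)
        out = BASE58_ALPHABET[rem] + out
    # Leading zero bytes encode as '1'
    return "1" * (len(data) - len(data.lstrip(b"\x00"))) + out

def multihash_sha256(data: bytes) -> str:
    digest = hashlib.sha256(data).digest()
    # Multihash prefix: 0x12 = sha2-256 function code, 0x20 = 32-byte length
    return base58_encode(b"\x12\x20" + digest)

print(multihash_sha256(b"raw authenticity proof bytes"))  # a "Qm..." hash
```

Because the hash commits to the proof bytes, anyone who later fetches the proof from IPFS can check it matches what was posted on-chain, even though the contract itself only stored 46 characters.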
However, given the nature of these proofs, they might be quite big, depending on the data you are fetching from the external data source. This is why we cannot always send them on the blockchain: it would be very expensive and, given the block gas limit, it is simply not possible above certain sizes. Some authenticity proofs that we have designed, such as the Ledger proof for the random data source, and the native proofs, the ones where the data source itself signs the data, are trivially verifiable on-chain. Here, for example, is a very short piece of code which verifies native proofs on-chain. As we would expect, it is nothing really complicated, but you can see the different steps: we need to check that the answer is fresh, that it is authentic, and that the proof has integrity, among other steps. For the Ledger proof it is even more involved, because we are checking that the trusted computing device is really executing the code we expect it to execute. You see in the code, for example, a code hash, which is the hash of the code being executed; since the code is open source, you can verify it yourself off-chain once. There is the public key of Ledger, which is the one signing the attestation claim, and more. Both of these already work on-chain, and the cost is quite reasonable, around 50,000 to 60,000 gas. But for bigger proofs, with bigger messages, it might not be possible, and in those cases we have to think of something different. This is what the proof shield is. The proof shield is a new concept where we take the same trust model we explained before and use exactly the same technologies to reduce the complexity of the proof verification down to a single signature.
So this is a different application running on trusted computing devices, or on the other sandboxing techniques we are using, such that the raw proof we had before is the input of this software, and the output is a simple signature which we can easily verify on the blockchain from the calling smart contract, just like with native proofs or with the Ledger proof. In this case the receiving contract receives not the raw proof but the proof shield proof. It is a different problem we solve, but since we keep the same open trust line, the security model doesn't really change. For example, if we have a TLSNotary proof, which, given its nature, ultimately relies on an open trust line with Amazon Web Services, well, if we somehow leverage a fully locked-down, provable machine running on Amazon Web Services which converts this proof into a single signature, then we are still trusting Amazon for that transformation, and yet the on-chain verification becomes possible. This is something we have written in C; it is quite a modular and portable architecture, it can run in different contexts, and it will be open source very soon. It supports basically all the existing proofs provided by Oraclize, so you could potentially use, I don't know, a Ledger device to run the proof shield to verify an Android proof, although of course in that case you open an additional trust line; the optimal case is when you use exactly the same technology. The first prototype will be released on testnet before the end of the year, and the first implementation will be based on a BOLOS app, so it is basically running on a Ledger device. The verification costs, again, are in the 50,000 to 60,000 gas range, and they can be reduced even further, so that the transaction signature itself, the signature showing that the transaction comes from a given sender, can serve as the verification step.
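The proof shield's data flow, big raw proof in, one signature out, can be sketched like this. Everything here is a simplified stand-in: the validity check is a placeholder for the full TLSNotary/Android/Ledger verification, and an HMAC with a shared key stands in for the ECDSA signature an actual trusted device would emit (on-chain, the contract would recover the signer's address rather than share a secret).

```python
import hashlib
import hmac

SHIELD_KEY = b"key-attested-to-the-trusted-device"  # hypothetical

def raw_proof_is_valid(raw_proof):
    # Placeholder for the full, expensive proof verification that the
    # shield performs off-chain inside the trusted environment.
    return raw_proof.startswith(b"PROOF:")

def proof_shield(raw_proof, result):
    # Runs inside the trusted device: checks the big raw proof, and if it
    # is valid, emits one compact signature over the result.
    if not raw_proof_is_valid(raw_proof):
        return None
    return hmac.new(SHIELD_KEY, result, hashlib.sha256).digest()

def receiving_contract_check(result, shield_sig):
    # The only check the receiving contract needs: one signature.
    expected = hmac.new(SHIELD_KEY, result, hashlib.sha256).digest()
    return shield_sig is not None and hmac.compare_digest(expected, shield_sig)

sig = proof_shield(b"PROOF:...big TLSNotary blob...", b"42")
print(receiving_contract_check(b"42", sig))                          # True
print(receiving_contract_check(b"42", proof_shield(b"bad", b"42")))  # False
```

The trust line is unchanged: whoever attests the shield's key is the same attestator you were already trusting for the raw proof, but the on-chain cost collapses to one signature check.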
So in version two of the proof shield we will be able to reduce the cost of verifying the proof shield proof down to basically zero, so that it is also compatible with already-deployed contracts, of which there are now several hundred on the main chain. So far we are the most widely used oracle service on the blockchain: we have sent over 300,000 queries on the mainnet, and today the range is typically between 1,000 and 5,000 transactions per day on the mainnet alone. We have integrations with other blockchains as well, as you can see here, but Ethereum is where we started and where most of the usage currently comes from. We are using different technologies and attestation techniques, and this is still an ongoing research problem. These are the things we have in the works for the future: with Stargate we will facilitate the use in private chains and for in-memory executions and simulations; for the proof shield, there are the testnet deployment and the version two I have already talked about; and then new authenticity proofs, and reducing the centralization points of the operator, so of us, of our service, via the delegation of the management of the machines to external nodes. Tomorrow, downstairs in the breakout hall at 3:45 PM, I will do a deep dive on Oraclize off-chain computations, and I will show you something quite interesting about using the same technologies and the same trust model to execute any piece of code off-chain and send on-chain just the result, also for non-deterministic pieces of code. Thank you for your attention.