I'm Sanjay. I work at Intel, where I constantly think about how we can contribute to Ethereum, and I'm one of the co-chairs at the Enterprise Ethereum Alliance. As part of that, we released the client spec, which lays out requirements for enterprise Ethereum clients, as well as for scalability and privacy, which are very important in that space. We are exploring various solutions, and one of them is trusted compute. The EEA has released an early version of that spec, and I want to introduce it briefly and then give a perspective on the work happening across the ecosystem with various partners on using trusted compute, connecting the dots for people between what trusted compute is and what it's being used for. We have a packed session, roughly six presentations and a panel, and hopefully it will be fun as we go through this. So, as I said, I'll talk briefly about the work we have done and then call up the presenters one by one. First, a quick word on what Enterprise Ethereum is, in case people here aren't up to speed. A year-plus ago, several of us got together and realized there was a lot of demand from enterprises for Ethereum-based blockchains for enterprise applications. There were several clients, and what we were seeing was a kind of fragmentation rather than a cohesive way to address enterprise requirements. That's why the Enterprise Ethereum Alliance came about.
There are really four things this alliance is about. One is defining standards for how enterprise clients work and interoperate, addressing specifically the requirements of enterprise clients, which I'll cover in the next slide. We also want to leverage and build upon all the work happening in public Ethereum on scalability and privacy, embrace all of that and extend it, and I'll talk briefly about that. And finally, interoperability and certification wherever it makes sense. With that quick overview: when we talk about Enterprise Ethereum, there are several key challenges, or requirements. Privacy is very important, in the sense that you might have, as an example, four participating banks on the same chain, but the transactions between two of the banks should not be disclosed to the other two. That's one requirement. Scalability means much more than the sheer number of transactions: we need higher throughput, and 10 or 15 transactions per second doesn't do it for us, though we are trying to build upon whatever the Foundation does. And permissioning: in an enterprise network, and especially in the hybrid networks we see, certain actors are permissioned and have roles, so that maybe only a certain subset can deploy smart contracts and a certain subset can initiate certain transactions, but then certain people from the public side of the world interact with this too. So we have to figure out what the identities are and how they are enrolled into the system.
The other part is interacting with the external world. To initiate transactions, you have to make sure that your wallet, or the keys used to generate transactions, are well protected, because from the enterprise perspective you are representing someone when you initiate a transaction. The other aspect of connecting to the outside world is bringing information from various oracles into the system, be it the price of a commodity or the weather in a certain place, and the question is how we bring all of that into a deployment in a trusted way. As for scalability, you have probably heard all of these things, so I'm not going to repeat them. The point is that we are very excited to see Ethereum 2.0, or Serenity, as Vitalik talked about it yesterday; wherever we can use the beacon chain is very exciting, and we will use sharding and Plasma when they become more robust and well defined, where possible. But what we are saying is that we have defined another approach based on trusted compute that we are working through, and that's what we're trying to connect here with the rest of the Foundation. So, a little bit about trusted compute, to level-set on what it is, because in my experience many people have different notions of what trusted compute is. Here is the definition I've been working with: it's a place where you can execute a piece of software without any external influence or interference. What that means is that you can divide a system, be it a PC or a server, into a trusted part and an untrusted part. In this picture, the trusted part is in blue, and the white around it is the untrusted part.
If something executes inside that blue box, it is integrity-protected. If you do one plus one inside the blue, you know exactly one plus one happened, because the underlying system gives you those guarantees. The other aspect is that if you have any keys in there, you can be fairly sure that even if malware is running in the white, untrusted part, your keys are protected and cannot be stolen. As a result, any data sent encrypted into this blue box from outside cannot be decrypted outside it. You can still mount denial-of-service attacks, in the sense that you can drop packets and so on, but you cannot fool the trusted part into thinking otherwise. The last thing about this container is that you get an attestation at the end of it: what code I executed, what inputs I got, and what my outputs were. There are various types of trusted compute comprehended in the spec. One is hardware-based trusted execution like Intel SGX, and then there are software-based approaches such as ZK proofs and MPC. The spec that I will share in a little bit comprehends all of these, because we believe no single one is a silver bullet. There will be situations where you may even have a heterogeneous deployment, where in a single deployment you use MPC for certain things and a hardware TEE for something else, and we are comprehending all those permutations and combinations. Now, a quick perspective on the ecosystem around trusted compute. Generally, it has been fairly active; lots of people have been working in the space, and several of them you will hear from right after I'm done.
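The attestation described above, a statement binding together what code ran, what inputs it received, and what it produced, can be sketched in miniature. This is an illustrative toy, not the SGX SDK or any real attestation API: the hardware key is simulated with an HMAC secret, and all names are made up for the example.

```python
import hashlib
import hmac
import json

# Stand-in for the hardware's attestation key. In a real TEE (e.g. Intel
# SGX) this key never leaves the trusted hardware; here it is simulated.
HW_KEY = b"simulated-hardware-attestation-key"

def measure(data: bytes) -> str:
    """A 'measurement' is just a cryptographic hash of the bytes."""
    return hashlib.sha256(data).hexdigest()

def attest(code: bytes, inputs: bytes, outputs: bytes) -> dict:
    """Produce a quote binding code, inputs, and outputs together."""
    claims = {
        "code": measure(code),
        "inputs": measure(inputs),
        "outputs": measure(outputs),
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    claims["signature"] = hmac.new(HW_KEY, payload, hashlib.sha256).hexdigest()
    return claims

def verify(quote: dict, expected_code: bytes) -> bool:
    """A relying party checks the signature and the code measurement."""
    claims = {k: v for k, v in quote.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        quote["signature"],
        hmac.new(HW_KEY, payload, hashlib.sha256).hexdigest(),
    )
    return sig_ok and quote["code"] == measure(expected_code)
```

The point of the sketch is the binding: a verifier who knows the expected code measurement can reject a quote produced by different code, and any tampering with the claimed inputs or outputs invalidates the signature.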
Going through this within the forum was a very interesting experience, because we got to hear several perspectives. The idea was to frame trusted compute as off-chaining: you have the chain, and you take computation off the chain, be it for scalability because the task is complex, or because you have private data that you cannot share on the chain, and you do all of that processing inside trusted compute. We had several objectives. One is private transactions; I gave the example of four banks on a chain, where two are transacting and the other two should not know. Another is two banks that both have data sets they want to combine to derive some inference, but without giving up control or the confidentiality of their data; you could use trusted compute in that setting. Those are the two that we have to some extent comprehended in the current version. The work still to do is around attested oracles; the specification will support them, and we will address that in upcoming versions. We also looked at identities, and we had a good conversation about whether identities should be based on just Ethereum addresses or on decentralized identities as defined by the Decentralized Identity Foundation. Generally we are going to explore that; we are supportive of the DID work being done in the industry, and more discussions on that are upcoming. Finally, if you look at this version of the spec, you can see that some of the APIs are not something you would use on public Ethereum, because they consume a lot of gas.
One of our objectives for the next version is to go through those APIs, clean them up, and align them with what is expected on the public Ethereum side. The other side of the APIs is making sure the design doesn't exclude anyone, be it different implementations of trusted execution environments or different implementations of ZK. That is definitely one of our objectives, along with comprehending all three forms. This early 0.5 version of the spec was announced yesterday, and our hope is that you will go check it out from your own perspective and provide feedback. Our objective is to take the 0.5 version that we announced this week to 1.0 by Q2 of next year, when there will be the next update of the spec, and to work on some new features we have not yet had time to explore. Hopefully that gives you a little insight into what we have been doing at the EEA. Now we'll have five more presentations from various EEA members on how they have been using trusted compute, or how they plan to, and then a panel, where hopefully things become more bidirectional. At this point: Nicolas, are you around? Nicolas.

I'm Nicolas Bacca, CTO and co-founder of Ledger, and I will tell you a bit about how we have been using trusted compute over the past year. Our use case for trusted compute is a bit unusual, because Ledger has mostly been known as a hardware wallet provider so far. But we realized that the main issue when people want to use a hardware wallet is that, well, they have to buy the hardware wallet.
Trusted compute, for us, is a way to change that: to see whether there is a way for people to download a hardware wallet onto their existing computers. We spent one year thinking about the different security threats we could face on trusted compute, and how we could virtualize a hardware wallet as much as possible. If we look at what a hardware wallet does today, it has four different properties. First, a hardware wallet should protect keys: when we do an operation with a key, an attacker shouldn't be able to access it. That's one of the most important properties. One difference here is that a hardware wallet should protect against a physical attacker, while we cannot really guarantee that with trusted compute; that is something we are okay with. If an attacker has physical access to the device, we accept that risk. Another property is that a hardware wallet should be flexible. If we have a wallet embedded in a trusted compute environment, we should be able to update the wallet in all kinds of situations, even if the device is compromised; and if there is a new dapp, we should be able to load it into the environment and update the wallet to work with the new use case. That's very important for Ethereum, because we have a lot of dapps, and they all interact with the user differently. We want to make sure that malware cannot make you do something else with your dapp: sign something you didn't want to sign, and use your funds for something else. We also want to make sure that on the hardware wallet we get a user confirmation. That was something missing from our stack before.
I will explain a bit later how another Intel API helped us solve that. User confirmation is paramount when you are spending money: you must click, and you must make sure that this click cannot be faked by malware. I had another point, but I don't have the right version of the slides, so if it comes back to me I will tell you later. Among the things we have done, we managed to guarantee portability between hardware wallets and trusted compute by using an architecture that abstracts the stack. Today, when you write an application for Ledger products, you write it in C; we consider C portable enough for people to work across several environments. Either you are working in a native environment like a secure element, in which case you are cross-compiling to ARM, or you are working with an enclave like this one, in which case we cross-compile to a virtual CPU called Moxie, which has already been well studied in the Bitcoin environment. The virtual machine is very simple, so you can guarantee it can be tested, and certified if you feel the need later. Now, moving back to where we were in 2017: we did an initial release, which was not very successful. One problem with that release was that all code you developed for your hardware wallet, or for your new application running on trusted compute and SGX, had to be signed by Ledger. We want something completely decentralized: something executing securely that you can sign yourself, with a certificate you can trust. That's not something we were able to do before, because we had no way to gather user input and make sure that input was handled correctly.
Then we heard about a new API from Intel called Protected Transaction Display, which we use for several things. First, with Protected Transaction Display you can display things to the user that are hard to fake, and you can gather user input in a trusted way. Today the inputs you can gather are pretty simple: you have a PIN pad, and the PIN pad is scrambled on screen, so malware can't really know what the user has selected, but the enclave gets the right PIN that the user typed. It's a clean and easy solution for gathering secure user input, and we are using it for two things. First, to get a user confirmation if the code is not signed by Ledger. That solves the first problem: we can run unsigned code, code that we don't trust, and we can limit the features that code gets if it's not signed by us. We talked about attestation before: we can have an attestation saying the code is trusted by Ledger and running on a trusted platform if the code is signed by us. If it's not signed by us, it can still use all the properties of the secure environment, just without that attestation. That's one example, but it's very customizable, and the end result is that people will be able to deploy their own code without getting a signature from Ledger first. The other obvious use case of PTD, Protected Transaction Display, is that we can use it to display the transaction itself. When you're interacting with a smart contract, we can display the transaction you are sending, and we can also use it to load new UIs when you are using a new dapp. Today, when you're interacting with a dapp through MetaMask or another wallet and you send some data, you see that you are signing a blob.
You don't know what that blob is doing, and you need some other UI to confirm that you are doing the right thing. PTD allows you to solve that, together with a platform like ours that lets you dynamically load new code for your dapps. You could have one small piece of script for each new dapp, and this way we solve usability problems and security problems at the same time. That was a short presentation of where we are today. I wanted to publish some code, but we want to make sure we feel comfortable, security-wise, with letting people put real money on it and test it. So what we will do in the coming months is first run a CTF: we will publish some code with some funds on it and let people try to take them. The good thing is that they shouldn't be able to; if people manage to do it, we will iterate. We will publish a very detailed test report of everything we did with the wallet, and when we are comfortable with that, which should be during Q1 of next year, we will release our SDK. That will let people technically download hardware wallets and play with their own applications. That's it for me.

Hi, I have a question. You're using this PTD API. How does the user know that whatever is displayed on the screen, this shuffled PIN pad, is really displayed by the trusted enclave?

You don't really have a way to know that, because the screen can still be overlaid by malware. The idea is that you ask the user to confirm back something that is displayed on screen. For example, if you display an amount, you ask the user to type the amount again using the scrambled PIN pad.
And you collect that back in the enclave. If the user is able to type it back, it means that what is displayed is correct. So you ask the user to confirm by re-entering what was displayed on screen.

Just to clarify: do you also use an additional LCD screen, as on your Ledger devices? No; in this picture everything is in software.

So the host can intercept your keyboard as well as your display? Yes. Then I don't see how you can use either one to verify the correctness of the other.

The host can intercept the display only in the sense that it can draw something on top of what is displayed; with PTD, the host cannot see what is displayed.

What prevents the host from simulating everything? The host can simulate everything, but it can't know what is displayed. Say you display an amount. The host can overlay another amount: you wanted to pay $1, and the host tells you that you are going to pay $1,000. But in that case, you ask the user to type the amount back using the scrambled PIN pad, which the host cannot read either, since the host cannot read what's on the display. So if the user types back that the amount was 1 and not 1,000, and confirms, you know that you are paying 1.

Another question: do you use remote attestation at all for your solution? We use remote attestation to guarantee that the enclave is genuine, yes.

What do you think about Intel forcing you to use a centralized service for remote attestation? For the time being we rely on it, and then we rely on our own attestation, which can be verified on chain.
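The scrambled-PIN-pad confirm-back flow just described can be modelled in a few lines. This is a toy sketch of the idea, not the real PTD API or Ledger's implementation: the enclave knows the random layout, while the (possibly malicious) host only ever sees screen positions, never the digits behind them.

```python
import secrets

def scramble_layout() -> list:
    """Enclave picks a fresh random mapping: screen position -> digit."""
    digits = list("0123456789")
    # Fisher-Yates shuffle driven by a CSPRNG inside the 'enclave'.
    for i in range(len(digits) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        digits[i], digits[j] = digits[j], digits[i]
    return digits

def host_observes(layout: list, typed_amount: str) -> list:
    """What the host sees: only the positions the user pressed."""
    return [layout.index(ch) for ch in typed_amount]

def enclave_confirm(layout: list, positions: list, displayed: str) -> bool:
    """Enclave maps positions back to digits and checks the echo."""
    echoed = "".join(layout[p] for p in positions)
    return echoed == displayed
```

Because the layout is fresh and invisible to the host, the recorded positions are meaningless to it: it can neither learn the confirmed amount nor replay a confirmation for a different one.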
I think Intel is working on more flexible attestation schemes that will be less centralized in the future.

Would you say it's critical for you to have IAS decentralized? I would like to see it decentralized, but for the time being, relying on IAS for the bootstrap and then on our own attestation scheme for the scripts we execute in the enclave is good enough. Thank you.

Thanks, Nicolas. I think we're done with the time here. Next up is Sid from Weeve.

Hello, everyone. I'm Siddharth; you can call me Sid. I'm the lead venture architect with Weeve, a deep-tech company out of Berlin working on data attestation, data validation, and data communication using the powerful paradigms of IoT and blockchain. Unfortunately my CTO, Professor Sebastian Gajek, who was supposed to give the talk today, couldn't come, so I'm replacing him. The premise of my talk lies in a future I think we all believe in: building a decentralized, autonomous machine-to-machine economy, and how we can use trusted IoT oracles for that. Since we are here, I'll dive right into the Ethereum side of it. We believe IoT devices can be good data oracles for the Ethereum network, but there are two fundamental problems. First, IoT devices today are widely compromised: many attacks are possible, man-in-the-middle attacks, buffer attacks, which make it impossible at this moment for them to serve as good oracles for the blockchain or for cloud applications. Second, once the data has been produced, the integrity of the data itself is not secure: it can be easily manipulated or faked, which again makes it hard for IoT devices to be good data oracles.
Imagine a hybrid car charging via an induction loop at a red light, where the car's wallet needs to pay for the kilowatt-hours of energy that flowed into the car. The wallet doesn't really have a guarantee of how many kilowatt-hours actually flowed in, and there needs to be a good attestation so that the wallet recognizes the amount and pays the equivalent. The problem really is: how do we ensure the truthfulness of this value chain? We believe we have elevated data from a resource to a digital asset, and that it could stand in for any asset in the physical world. Normally, when we pay for an asset, we have some guarantee that it has been verified or that its quality has been assured by some agency. That's not the case with the current infrastructure for how data is produced and traded. And even if in the future we have regulations and infrastructure in place that make it easy to pay for the data you consume, there would still be too many difficulties in post-transaction data settlement, because given the sheer volume and velocity of the data flowing, it's very difficult to regulate after the fact. So the whole problem comes down to attesting data at the source, which is also what Sanjay and his team at Intel are doing. To have good data oracles, we need trusted data flowing into those oracles, and to have trusted data we first need trusted computing. That's why we follow the analogy of data as an asset that is harvested at the source, harvested in a manner such that it can be trusted, then processed, transported, assessed, and finally commercialized. This, in brief, is the technology stack we at Weeve are building.
We build on top of ARM's hardware security extension, TrustZone: a lightweight trusted-execution-environment-enabled operating system that lets data be attested at the source. The OS provides properties that allow a clean compartmentalization of the secure world and the normal world. You keep all your cryptographic keys and material in the secure world and your everyday operations in the normal world, so even if your wallet or your normal world is hacked, your secure world remains secure and your keys are protected. Then we have the secure communication protocol. Traditionally, IoT devices have used the MQTT protocol, which takes around seven communication rounds, and once you put TLS/SSL on top of it, it just isn't usable enough for the sheer volume of IoT data. So we have integrated a simple secure messaging protocol on top of MQTT; it's lower latency and requires only three communication rounds. The next step is the testimony. The testimony proves that the program was executed in the manner it was supposed to be executed in. Finally, you can transport the data to the blockchain layer or the cloud layer, where it can be recognized as truthful and then traded, commercialized, or utilized in some intelligent way. These are some of the features we think are important, and they are why we use ARM's TrustZone extension at the moment. In the future we plan to be compatible with Intel SGX, although it's currently better suited to cloud architectures than to embedded systems, which is why we use TrustZone for now.
It allows us to isolate programs, shield the cryptographic material, and provide a secure boot process, plus something we like to call the snags, which basically prove that the program was executed in a certain way. That's the brief architecture of the normal operating system and the secure operating system; there is also a 60-page white paper, which I can have distributed, where you can read about what each element does after the talk. Finally, the WeeveOS has some built-in functionality: a crypto API, secure key storage, secure boot, and the communication protocol, QTDS, the lightweight protocol I mentioned. Then there is the Ethereum wallet extension, which basically allows these wallets to pay automatically once the data has been attested, and of course the testimony. The testimony, in simple terms, is basically snapshotting micro-instructions at regular intervals, so that the data coming in and the data going out, snapshotted at regular intervals, can be verified later when it needs to be. We have the code on GitHub, and our team of developers from Berlin is there as well if you want to have a chat.

Could you explain in a bit more depth how the attestation system works? Who exactly are we trusting here? It's quite obvious that we are trusting whoever manufactures the chip and the ARM TrustZone architecture, but where does the attestation come from? That's not something TrustZone defines.

Right, TrustZone doesn't do that. That's why we build this operating system on top, which has the testimony, and the testimony takes continuous snapshots of the data entering the sensors or the processors. That's where the attestation comes in.
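One natural way to realize "snapshots at regular intervals that can be verified later" is a hash chain over the observed input/output pairs. The sketch below is our reading of the testimony idea for illustration, not Weeve's actual implementation: no snapshot can later be altered or dropped without changing the final digest.

```python
import hashlib

def snapshot(prev_digest: bytes, data_in: bytes, data_out: bytes) -> bytes:
    """Extend the hash chain with one interval's input/output snapshot."""
    h = hashlib.sha256()
    h.update(prev_digest)
    h.update(data_in)
    h.update(data_out)
    return h.digest()

def build_testimony(intervals) -> bytes:
    """Fold a sequence of (input, output) pairs into one final digest."""
    digest = b"\x00" * 32  # genesis value for the chain
    for data_in, data_out in intervals:
        digest = snapshot(digest, data_in, data_out)
    return digest
```

A verifier holding the same interval log recomputes the chain and compares final digests; if any interval's data was manipulated in transit, the digests diverge.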
Those two presentations were about getting external data onto the chain, either through a transaction or through oracles. The next one is about privacy: Guy, from Enigma.

Hello everyone, I'm Guy Zyskind, co-founder and CEO of Enigma, and I want to tell you about privacy-preserving smart contracts. There's a much longer conversation tomorrow, where I'll go much more deeply into the architecture; today I want to talk about some applications, especially those that relate to identity, and give just a basic overview of the platform. This should be obvious to everyone in this room: blockchains are public ledgers. All data you put in a smart contract, anything you want to process on the blockchain, is completely public for everyone to see. That means if you have an application that needs to process, say, your credit card information, that information would be made available to everyone in the world, which is obviously unacceptable, and it greatly limits the type of applications you can actually build on the blockchain. I'd even go further and say there are maybe only a couple of good applications I can think of that make sense on the blockchain without solving for privacy. That's really where Enigma comes in. Our goal is to allow nodes in a distributed network to operate over encrypted data; that gives us the privacy. We essentially want to upgrade smart contracts into secret contracts. Secret contracts are just plain smart contracts, but they also protect data in transit and in use. To be a bit more technical: smart contracts provide correctness, and that's really what excites us about blockchains.
If you put code on the blockchain and send data into it, you're going to get the correct result, and no one can tamper with it, assuming you trust the model. That's great, and that's why we're excited about blockchains, but it doesn't protect privacy, and that's where secret contracts come in. So we have Enigma Discovery. Discovery is the first network we're releasing; a version is out right now, it's public, the code is open, and several companies that are very well known in the space are building on our technology. We're making more improvements and will release what I'd call Discovery 2.0 in the next couple of months, with a lot more features. Discovery means that all nodes in the network have to run Intel SGX. SGX provides the infrastructure where we can run code securely, for both correctness and privacy. I won't get into the details here, and we don't rely only on Intel SGX; we rely on other things, like Ethereum for consensus. But the main idea is that privacy, right now, comes from SGX. Other interesting properties, and this is the first network of its kind: Discovery is completely permissionless. Anyone can join, become a node, and earn rewards; it's fully economically incentivized, using a kind of proof-of-stake model. What's probably most interesting for developers at this conference is that it's 100% compatible with Ethereum. If you want to enjoy the security of Ethereum and continue to develop decentralized applications on Ethereum, that's fine; we're not trying to compete or change your habits. But if portions of your application need to process sensitive data, we provide the means to go from Ethereum, securely compute on Enigma, and come back to Ethereum, and we make that very easy for developers and seamless for users.
So the architecture, at a very high level, is very similar to how blockchains work today. Not shown here, you have developers who write secret contracts and deploy them to the Enigma network, and then you have users who communicate with those secret contracts from the outside by sending tasks. Tasks are basically special transactions that include encrypted inputs as payload, and the middle is really where all the magic happens in our network. So I think this is a statement everyone would agree with: modern applications require the use of sensitive data. That's not something we can debate or change, or even want to change, and I'm going to talk about one specific example in the identity space that we've been working on together with a company called Data Wallet. It's pretty common today on the internet that before you can use some kind of application, there's a gateway. That gateway needs to make sure you're not trying to game the system and that you're not trying to mount some Sybil attack. It needs to verify that you're a person, and that you're one unique person who isn't creating, say, a thousand fake accounts. That's prevalent in centralized systems, but also decentralized ones. So we're working with Data Wallet to produce that in a privacy-preserving way that is provably correct. Let's see how this works today, because this kind of construct already exists: if you want to use a gateway like that, the gateway may ask you to provide some of your personal information, like your Facebook data, and you're going to send that in the clear to that service provider.
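The "tasks" described above, special transactions whose payload is encrypted input, can be sketched roughly as follows. This is a toy illustration, not Enigma's client library: the SHA-256 keystream XOR stands in for real authenticated encryption, and the shared enclave key and contract address are assumed to come from a prior key-exchange step.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher (SHA-256 counter keystream XOR). A real client
    would use authenticated encryption; this only illustrates the shape."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# A "task" is a special transaction whose payload is encrypted input.
enclave_key = b"secret-shared-with-the-enclaves"      # hypothetical key
plaintext = b'{"ssn": "000-00-0000"}'                 # sensitive input
task = {
    "contract": "0xSecretContract",                   # hypothetical address
    "payload": keystream_xor(enclave_key, plaintext), # ciphertext on the wire
}
# Relaying nodes see only ciphertext; only a holder of the enclave key
# (i.e. the enclave itself) can recover the plaintext.
assert task["payload"] != plaintext
assert keystream_xor(enclave_key, task["payload"]) == plaintext
```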
That service provider is going to run some algorithm, we call it "bot or not", which simply runs some statistical check of whether you're a real person, but you have to completely trust that service in two ways. First of all, they can censor you: if they don't like what they see, they can say, you know what, we're not giving you the stamp of approval, and you cannot continue to use whatever other service you want to use. What's even worse, and more likely, is that you give them your social information in the clear. They usually store that data, and you have no control over what happens with it. So we want to do better, and what we suggest, and the way this application works, is essentially to put everything on Enigma in an encrypted form, which creates neutral, safe ground. The way this works is that the gateway, the developer, deploys the algorithm as a secret contract to the Enigma network, and on the other hand the user submits their encrypted data into the network, and then all of the computation happens in the network over encrypted data. The only place the data is ever decrypted is inside SGX enclaves, where even the host cannot probe in and see the data. In a bit more detail about how this was actually built: Data Wallet developed the front end, basically a mobile app that you have on your phone. It allows you to take your social information out of Facebook, and you get that locally.
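The enclave side of this flow, decrypting only inside SGX, running the check, and letting a single bit out, might look conceptually like the toy sketch below. The cipher and the bot heuristic are invented stand-ins; real secret contracts, SGX sealing, and Data Wallet's actual features differ.

```python
import hashlib

def xor_cipher(key, data):
    """Toy cipher standing in for real encryption (see the caveat above)."""
    ks = b""
    i = 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + bytes([i])).digest()
        i += 1
    return bytes(a ^ b for a, b in zip(data, ks))

def enclave_bot_or_not(ciphertext, enclave_key):
    """Everything in this function models code running inside SGX:
    the only value that leaves is a single bit."""
    profile = xor_cipher(enclave_key, ciphertext).decode()
    account_age_days, friend_count = (int(x) for x in profile.split(","))
    is_bot = account_age_days < 7 and friend_count < 5   # toy heuristic
    return is_bot    # one bit leaks; the raw profile never does

key = b"provisioned-only-inside-enclaves"                # hypothetical
encrypted_profile = xor_cipher(key, b"3,1")  # 3-day-old account, 1 friend
print(enclave_bot_or_not(encrypted_profile, key))  # True
```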
Then, using our library, with a click of a button they can encrypt that data with a key that only exists in enclaves in our network. That encrypted data is sent to the network, to an enclave, or actually to several enclaves, and only then is the information decrypted, inside the enclave. The enclave itself runs the secret contract, in this case the bot-or-not algorithm, and returns the result. So the only thing that really leaves the network, the only thing that leaks, is the result, which is just one bit: whether you're a bot or not, whether it's a fake account or not. Now, this is really just touching the surface; there are a lot of companies building a host of applications that are needed for the space. We're talking decentralized governance, auctions, protecting state in games. This was an example about identity and Sybil prevention; we have work done on decentralized credit and other machine learning applications. The sky is really the limit if you can truly protect privacy, and tomorrow I'm going to go much more deeply into the architecture and these use cases, mention some real names, and show real code and examples from partners we're working with. So if you're interested, I do suggest you attend tomorrow; it's going to be here at 12:30, and you're all welcome. Thank you. Thank you. The obligatory question: how does the application know it's actually uploading the data to an enclave that the application wants to trust? Right, so basically what happens is that we have a registration process for every worker in our network. That worker creates a new key inside the enclave, which is signed, goes through the IAS, and provides proof that it was created in a genuine enclave. So you rely on IAS and remote attestation; doesn't it concern you that this introduces a centralized point?
Well, what we do is use it only in a bootstrapping phase. There's a registration: you go to IAS once, they sign the report, that goes on the chain, and this is the best we can do right now. We would love to see a decentralized alternative to IAS, but beyond that bootstrapping step, if you trust the process, then you're good. Okay, thanks Guy. So next up, to give his perspective, is Noah from Oasis Labs.

Hey everyone, my name is Noah Johnson, I'm with Oasis Labs, and I'm going to talk about what we're building at Oasis. Of course, blockchain is an exciting technology that provides a number of unique properties and capabilities, like openness, decentralization, and integrity guarantees, but as has been well motivated today, blockchain by itself doesn't provide privacy, and this is very limiting; it severely restricts the sorts of applications you can run on today's blockchain platforms. So what we're building at Oasis is a platform that protects data and smart contract state and provides privacy at every layer of the stack. At the application layer, we have a set of tools and libraries that allow developers to safely analyze and compute on sensitive data, with guarantees that the results of those computations don't violate privacy. At the platform layer, we have a system for ensuring that workers can't view or steal the data. And then we have a scalability architecture that allows the entire system to support much more complex applications than you could run today. So this is really a top-to-bottom approach, all of which I think is necessary to provide end-to-end privacy; it's not sufficient to just protect the data on the workers if it's trivial to write a smart contract that leaks out sensitive data. Our use of trusted compute and secure hardware is at the platform layer, so it's specifically concerned with this problem: given an open network and workers
whom you may not trust, if those workers are computing on sensitive data, how do you make sure they can't actually leak the data? What we're aiming for is a model called confidentiality-preserving smart contract execution, and the general idea is that all of the inputs and outputs of a smart contract are encrypted, so that nobody in the network is able to view the contents except the smart contract itself. This means no node can view the data, the gateway can't view the data, and even the worker that is processing the smart contract can't view the data. So how do you actually access this functionality? Today, if you want to call into a smart contract, you would usually use existing interfaces like Web3. Unfortunately, Web3 was designed under the assumption that all data is public, so the existing APIs are insufficient. For that reason, we've developed an extension to Web3 called confidential Web3, which adds new APIs that essentially allow users to construct a secure channel into the smart contract, so they can encrypt the transaction payload in such a way that only the smart contract can decrypt it. Confidential Web3 is backwards compatible with standard Web3, so it's a drop-in replacement; the modifications needed to existing applications are very minor. We might also look at supporting the EEA's trusted compute API as well, given that it solves a largely similar problem. Now, there are a number of different technologies for enabling secure computing. These are generally classified into techniques that rely on secure hardware, and crypto-based techniques that don't rely on secure hardware but instead on cryptographic algorithms. There are different tradeoffs to all of these techniques; as Sanjay was mentioning earlier, there's no single approach that is a silver bullet and is the, you
know, optimal approach; it depends on the application, the performance requirements, and the threat model. So the goal of Oasis is to integrate all of these different technologies into a single platform, let the developer decide which one they want to use, and make it very easy for them to get access to these technologies. One of the reasons we're especially excited about secure hardware is that, of these techniques, it's by far the most performant and the most general-purpose. The fact that you can run code directly on bare metal means you can run essentially any application, and you pay very little in performance overhead, so this is the first technology we'll expose in Oasis. If people aren't familiar with secure hardware, the basic property it provides is essentially this: the hardware allows code to construct what's called a trusted execution environment, in which you can put applications and data, with a guarantee that nobody else on the machine, other applications, the operating system, even the user, is able to view or tamper with the contents of this trusted execution environment. So it provides both integrity and confidentiality, and it also allows the hardware to generate a certificate that can be verified by a remote party to prove that the hardware is genuine and the right code is running. This goal of emulating an ideal trusted third party, providing certain guarantees even in the face of untrusted parts of the system, is very similar to the goals of a general smart contract platform, and it turns out that smart contracts and TEEs have very similar and in fact complementary properties. For example, smart contract platforms provide strong availability: given that it's a decentralized network, if one node goes down you can still access the network, and state is stored on a permanent ledger, so there's persistence. But, as we mentioned earlier,
today's smart contract platforms don't provide any confidentiality for data or state. On the other hand, trusted execution environments provide weak availability, given that anyone can just shut off a machine that is running a trusted execution environment and you would lose the application state, and because TEEs don't have direct access to durable storage, they don't provide persistence; but they do provide confidentiality. The fact that we have these complementary properties means that if you combine the technologies, you can get the best of both worlds, and that's exactly what we did in our previous research project, called Ekiden. We showed how to run smart contracts inside a trusted execution environment, in order to endow the smart contracts with confidentiality, and attach that to consensus and a distributed ledger, in order to retain all the benefits of blockchain and decentralization. All the details are in the paper. We're also working with a project called Keystone, which is a fully open-source design for secure hardware. Today, the state-of-the-art secure enclave technology is Intel SGX, and it's nice in that it runs on commodity hardware and has very high performance, but we want to provide more options to developers and grow the ecosystem. One of the things we want to use Keystone for is to explore new designs for secure enclaves and bring researchers together on an open platform for figuring out what the limitations of today's designs are and how to improve on them. Keystone will eventually publish a specification for a secure enclave that is royalty-free and can be manufactured by any chip maker, and as those chips become available, they'll be exposed through the Oasis platform, so developers can choose whether they want to use, for example, SGX or Keystone. The idea is to grow the entire community by accelerating adoption of secure enclave technologies and allowing developers to
choose which one makes sense for their application. A few weeks ago we hosted a workshop at Berkeley, specifically around how to design open-source enclaves, and we brought together industry and academic leaders in the space to talk about the open problems and how to work together towards this goal. Okay, so once you can protect data on a smart contract platform, this opens the door to a lot of really exciting applications; here's a list of some of them, and many are already being developed on the Oasis platform, things like credit scoring and decentralized exchanges. I want to talk about one application that's especially exciting. It's called Kara, and Kara is being deployed on Oasis. It's designed to solve the problem of data silos around medical data. Today, medical data is extremely valuable for research purposes, but users are reluctant to share their medical data, rightly so, primarily because of privacy concerns: they don't know what people will use the data for. Kara tries to solve that problem, and the way it does that is by allowing patients and doctors to input their data into a smart contract; researchers who want to train models on that data can submit those models to the smart contract, and the smart contract mediates that interaction, with the entire thing running on the Oasis blockchain. Once the model is trained, the users are paid tokens to compensate them for contributing their data, and the researcher receives a trained model. The nice thing about running this as a smart contract, of course, is that the patients don't have to trust the researchers; they know their data will only be used for the purpose of training this machine learning model. And because this runs on a platform that can expose secure enclaves, all of the computation can run in a secure enclave, letting patients know that their data won't be leaked or viewed even by the worker doing the training. And this
is exactly why secure enclaves are so exciting for these sorts of applications: they allow you to run very complex application workloads with very little performance overhead while getting those security guarantees. If you're interested in learning more about this or you want to get involved, one of the developers of this project, Nick, is actually in attendance today and he'd love to chat with you. Okay, that's the end of my talk. If you're interested in building a privacy-preserving application on our testnet, you can go to this URL, and you can follow our Twitter to get updates. And if you're interested in the space and want to help us build the system, we're always looking for passionate people who are as excited about privacy as we are. That's it, thank you.

Thanks, Noah. So now we shift gears a little: in the last two sections the idea was to show how you could use some of these things in real-world applications that various partners are building, and now I'll have Marley from Microsoft talk about how, if you wanted to build something on these enclaves, you could actually get access to this hardware, and what the environment looks like.

Let me get my slides up here in a minute. So, Microsoft: I work on the Azure team, and that's our cloud platform. One of the major issues of adoption for clouds is, why would I run my code on a public network when I have to trust you, Microsoft, not to steal my intellectual property and do all the evil things you used to do in the past, Darth Vader-ish. So we see confidential computing holistically and say this is super important for us to grow our core business. We have to have a cloud platform where our customers, whether they're a large corporation or someone building what I would like to call the next-generation killer decentralized
application, have the ability to just-in-time grab a secure enclave (we'll talk about this in a minute), confidentially execute some arbitrary code, and then have its results published on some network of some sort. How do we make that possible? What infrastructure is required? Our main goal is primarily to remove Microsoft from the trust base, so you no longer have to trust Microsoft to run your data and your code in our cloud. That seems kind of weird, but it's really core to growing our business; it's core not only to getting our largest financial services customers, but also a lot of you folks who are really concerned with centralization. I've been doing this since 2015, and I always get that first question: people walk up to me and go, what's Microsoft doing here, you're centralized. It's okay, we get it, but we're trying to contribute to the distributed space, and this is broadly applicable. The thing all the previous speakers talked about is how you apply enclaves, and the good news is you can apply them in a lot of different ways that solve a lot of different problems. But it's not a hammer where everything is a nail; it doesn't solve every problem. When you do find a seemingly intractable problem, though, having an enclave available lets you explore different ways to solve it, particularly around privacy and confidentiality. So let's skip through this: we're looking at things like machine learning and SQL Server; essentially every product we run in the cloud is looking at how it's going to use TEEs, or trusted compute, to enhance the product and remove Microsoft from the trusted computing base. Let's go to the next slide. Okay, so the basic things we're doing: broadly, of course, we think this is going to apply everywhere, and we're really
interested in hearing from customers about how they're using it, and we want to make sure we encourage them to do that. Have any of you worked with enclaves? Has anybody written an app that runs in an enclave? Go on and show your hands. All right, a handful of you, relatively speaking. How easy was it? Extremely easy? No, it wasn't. It's probably easier now, but looking back you probably have some scar tissue, because it was very difficult. What we're trying to do is not just get your code running in the enclave; it's also how you get your data to the enclave. There's a lot of infrastructure you have to build there, so we want to make it very easy for you to introduce this into your environments; we want to solve those difficult problems. Some of the things we're doing: we're working on the Open Enclave SDK. We share the desire to provide choice in enclaves. We're a huge customer of Intel, they're a huge partner, but our customers want other options, so we have things like a hypervisor-based enclave as an option, and we're working with other hardware providers to make sure we provide a platform that gives you choice in enclaves. But that introduces an additional problem: now you have to write your code specifically for the type of enclave you're using, and that creates a significant problem. So we're also investing a lot, through Microsoft Research, in building what we call the Open Enclave SDK. This is an open-source SDK that allows you to write your apps through the SDK and abstract yourself from the actual implementation, so you get a JVM-type experience where you write your code once and aren't really concerned with which actual enclave you're using. That's a pragmatic approach. There will still be scenarios where you want to write to the bare metal, but there are a lot of scenarios where you don't want to do that
or it's not the most cost-effective thing to do, so these abstractions are very important. We're talking about SGX and VSM, and others will come along. Let's go to the next slide, thank you. There are two basic things we're trying to do; these are examples of frameworks we're working on to try to let customers build killer applications, because we think that's what's really going to drive blockchain networks. We've worked with tons and tons of customers to try to establish networks and roll these consortiums out, and for the EEA it's very difficult to justify rolling out this infrastructure and contributing the resources needed to work with your fiercest competitors, investing that time and money to build a network that will smooth operations across the ecosystem. It sounds very noble, but at the end of the day the people writing those checks don't really mind the friction in the marketplace, as long as it's spread evenly and they're doing fine. So what's actually going to be the tipping point are killer applications: consumer-facing, enterprise-facing, industry-facing, crossing industries. What we need to do is address some fundamental problems in architecture and infrastructure, so there are tools and platforms available for you to start building those killer applications. One of them is called the Confidential Compute Blockchain Framework. This is essentially a node-level data-tier optimization; it is not a blockchain. It's an open-source project we'll be releasing soon (can't give you the exact date, but Mark is shaking his head in the back) that essentially provides a framework for using, in this case, SGX to create a confidential compute consortium. It's not for public networks; this is a private consortium network that gives you a governance framework, so it controls who you let into your network; privacy, so you can have real privacy; and performance, because once we
introduce trusted compute, we can use faster consensus algorithms. So it's a framework; it's available, and you can take it and build on top of it to create high-performance consortium blockchains, to really deliver on some of the enterprise promises that we at the EEA are striving to make possible. Above that is another thing, in the middle tier: how do I build applications, with infrastructure where I can execute code and logic off-chain and then persist the result on-chain? I have to explain this a lot, and I talk about the blockchain being the distributed truth. Most of you will get this, given the audience and demographic in the room. If I go and look at the blockchain and I see that the answer to my question is 42, and the question is "what is the meaning of life, the universe, and everything", I don't know that. If you haven't read Douglas Adams' Hitchhiker's Guide to the Galaxy, I encourage you to do so; this is one of the big themes that runs throughout. You go and seek the answer to what is the meaning of life, the universe, and everything, and you finally find that the answer is 42. How many of you are going to be satisfied with that answer? No one is going to be satisfied with that answer, even though you know it's the truth, it's on the blockchain. So what do we have to do to be able to accept that as the answer? Well, you're going to go off on a wild trek across the galaxy to compute all these things, you'll come back, and sure enough the answer is 42. The next person comes up and asks, what's the answer? They see 42; they're not going to believe it, so they're going to do the same thing again. That's what blockchain networks do: we just compute the same thing over and over again to satisfy ourselves that yes, the answer is 42. The next person comes in, they don't trust it, so they have to recompute it. Now, that
works great, but it's not going to solve all of our problems; we need to be able to pull things off-chain, particularly the contract. If you think about what a contract is, it's a trust agreement between discrete parties. The only people who really care about the answer being 42, and really about the question, are the counterparties to that contract. So if I can pull that off-chain and then write the answer 42 down, then you, someone on this blockchain who didn't ask the question, see that the answer is 42 and you're like, big deal, I'm good with that, move on. So in the middle tier we think about solving those types of problems; we call it the truth-resolution tier, an option to pull your contract logic, and all sorts of oracles and things like that, off-chain. Trusted compute in blockchain involves that as well, and we address it in the Trusted Compute specification. In our middle tier we're trying to make it easy for you to use enclaves; it's more about pooling, getting scarce resources just in time to do certain things that you want done in private. Let's go to the next one. So yesterday I published a blog announcing the release of the enclave-ready EVM, which is an open-source C++ implementation of the EVM. It has no external dependencies, so you don't even need to use the Intel SGX SDK, and it will actually run outside of an enclave as well. Some things about it: it's not fully functional; it doesn't count gas. It is compatible with the EVM opcodes for Homestead, and it will run existing bytecode, things like that. But there are some interesting things about this, and we built it not because we're trying to compete with Ethereum or come out with an Ethereum virtual machine that's better than anyone else's: we had to build it for the Confidential Compute Blockchain Framework, to test that infrastructure. We had to have smart contracts that would execute in a TEE, to see whether this was actually going to work. So what we
did is say, okay, let's just release that work, because we think it's going to be valuable, and let me give you some examples. You can take this type of code and modify it, and you could add in features; because it is decoupled from the consensus algorithm, you could use something like the ABCI interface that Burrow uses to have this enclave EVM run off-chain and then write results onto the chain. You could even do that on a public network: there must be some node that actually executes the code in SGX, but the rest of them don't have to, as long as you come to agreement on consensus. It would be an interesting scenario. It's MIT-licensed, and I don't believe it will currently go any more restrictive. All right, next slide. Oh, the URLs are there, I don't know if you noticed. So let's go next. Here's another example; this is sort of an eye chart, more of an example of what we do in the middle tier. A lot of customers in the enterprise have difficulty managing keys; they want key management, and building blockchain applications for enterprise organizations creates a huge problem in that, if you need to sign transactions for each individual person, you've got a lot of keys to manage. What we do here is use HSMs in Azure, via a service we call Azure Key Vault, and we front-end that with some enclaves. At runtime, when you're executing some logic, and it could be any type of code, your logic can call a cryptlet: you create some sort of result, like 42, and you need to sign it. We're not talking about an enclave proof here; we're just saying I need to sign this transaction, but I don't want to have my keys inside this code. We call this a crypto delegate: we just delegate all cryptographic
operations to an enclave out of the pool, and in the future we think this will be useful for multi-party compute scenarios, where we can rapidly create things like ring and threshold artifacts. Okay, next slide; trying to stay on time here. Big picture: again, we're building a broad, blockchain-agnostic platform. We started with Ethereum, but we support essentially all blockchains. Let me just blow this out and give you an example. CCBF is at that lower level, the data tier, introducing a framework that allows you to build the next-generation consortium, so this is a private network; that's down here. In the middleware platforms there's a range of services we'll be introducing. This is language-agnostic: the intention is to support any language, any runtime, whether it's WASM, .NET Core, or Java on the JVM; we're actually working on that implementation right now. And then we also have an offering called Azure Blockchain Workbench, which allows you to rapidly prototype your solution. So any of you who are trying to build an application, or to get funding or approval for one, but don't have anything to show for it: you can rapidly build an application in Workbench, have it build a solution on Ethereum or an EEA implementation, and go get your funding, because it's a pretty user interface and has all the features and bells and whistles you want for a demonstration. It's an awesome tool for moving rapidly towards your use cases. All right, that's it for me.

I think last up for the presentations is Lei from iExec, and this one is a short presentation and, I think, a short demo to put some of these things in context.

Morning everyone, my name is Lei and I am from iExec,
which is based in Lyon, France. Today I would like to give you a presentation on iExec's end-to-end trusted execution with Intel SGX, which is the first scalable, business-oriented solution for securing blockchain-based computing using Intel SGX. I would like to start by talking about what Intel SGX is, although I know a lot of you understand how it works. Intel SGX stands for Intel Software Guard Extensions. Basically, it's an Intel security technology that has been available since 2016, and it is based on secret keys that are fused inside the CPU during manufacturing. So it's a hardware-based security technology, which means it leaves a very, very small attack surface for malicious attackers. Basically, you can think of SGX as a secure bubble surrounding your application and protecting it from the host machine. For example, say I have a fintech application that contains some of my sensitive data, and I want to run this application on a decentralized network. If the application runs on your machine, think about it: you are the administrator of your machine, so you have hundreds of ways to easily access this application and steal the sensitive data involved in it. Thanks to this Intel SGX technology, SGX creates a bubble that strictly isolates the application from the host machine, so even as the administrator of your machine, you cannot penetrate the bubble to access my application, tamper with it, or steal the secrets involved in it. Based on SGX, we propose our solution, which provides end-to-end data protection for applications running on blockchain-based decentralized networks. So firstly, what is our definition of end-to-end data protection? We know that a typical application's data consists of three parts: the application input data, which most probably comes from the user side as user
the input data the application embedded data and application output data normally it refers to the application result right so our end-to-end for protection means a protection for all these application data all these application data stays in the encrypted status during the whole procedure of running application so the corruption only happens inside a high secured sgx bubble which is also called sgx enclave and cannot be accessible from outside world so from outside world everything happens in this sgx enclave is in the encrypted status take example i have a fan tag application right i want to run it on the uh blockchain based decentralized networks and this fan tag application requires some user input data so from my side these user input data could ask me some about my say bank account information my privacy information so i definitely don't want these my personal information to be just diffused to a decentralized networks so all these user input data has to be encrypted before standing to the decentralized networks to feed the application execution and at the wrong time of the application this application runs in the in tow sgx enclave so from again from outside world everything happens in this sgx bubble is in encrypted status and finally when the application finishes running it will have the result and the result is also encrypted inside this sgx enclave so finally only the corresponding user who trigger this application is able to download an application a downloaded application output and only this corresponding user is able to decrypt the application output so this is extremely important for the decentralized applications which contain some sensitive data and another great use case the data monitoring say if i am a data provider i would like to make money by renting my data to you to feed your application i definitely don't want you just copying my data and the result of someone else right so the end-to-end for the full data protection is really a essential 
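The flow just described can be sketched in a few lines. This is a toy model, not iExec's actual implementation: a SHA-256 counter-mode keystream stands in for a real authenticated cipher, and the "enclave" is just a Python function, but it shows the shape of the guarantee, namely that data is in the clear only inside the trusted boundary and only the key-holding user can read the result.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy SHA-256 counter-mode keystream (stand-in for a real cipher; NOT for production)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

# --- user side: input is encrypted before it leaves the user's machine ---
user_key = secrets.token_bytes(32)           # user-generated secret key
nonce_in = secrets.token_bytes(12)
plaintext_input = b"bank-account: 1234"
ciphertext_input = encrypt(user_key, nonce_in, plaintext_input)

# --- "enclave" side: data exists in the clear only inside this boundary ---
def run_inside_enclave(ct: bytes, key: bytes, nonce: bytes):
    data = decrypt(key, nonce, ct)            # decrypted inside the enclave only
    result = data.upper()                     # the application's computation
    nonce_out = secrets.token_bytes(12)
    return nonce_out, encrypt(key, nonce_out, result)  # re-encrypt the output

nonce_out, ciphertext_output = run_inside_enclave(ciphertext_input, user_key, nonce_in)

# --- user side again: only the key holder can read the result ---
print(decrypt(user_key, nonce_out, ciphertext_output))
```

Any node operator observing `ciphertext_input` and `ciphertext_output` sees only encrypted bytes; without `user_key`, neither the input nor the result is recoverable.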
In such a context and use case, it is a hard requirement. So, we have talked about end-to-end, full data protection; why is it so important to the iExec platform and to blockchain-based computing? We know that the core of a blockchain is its decentralization, which means applications and data run on decentralized networks, and so a question naturally arises: does a decentralized network mean a trustless one? To some extent, yes. Think about legacy centralized infrastructure, think about how a cloud provider works: there is a centralized entity, the cloud administrator, who is there to deploy sophisticated security mechanisms to protect the applications running on their network. For blockchain-based decentralized networks there is no such administrator, so without end-to-end data protection all your application data is simply exposed to millions of decentralized nodes.

As for the iExec platform: we offer a blockchain-based platform for trading computing resources. If you are a server provider, you can join the platform to monetize your server capacity; likewise if you are an application provider, and similarly if you are a dataset provider. Since our platform is blockchain-based, all of these applications and datasets run on decentralized networks, and thanks to end-to-end protection the application and data services can be monetized in a secure way via our decentralized platform. Full data protection and trusted execution are therefore a must for the iExec platform and for blockchain-based computing.

So here is our solution, which provides end-to-end data protection for the iExec platform and for blockchain-based computing. First, it provides full data protection covering the application's input data, embedded data, and output data, protecting them while the application runs on the decentralized network. Second, it allows us to make sure that the correct applications are correctly executed, and that execution is neither tampered with nor interrupted by any malicious attacker. From the user's perspective we assure two points: first, that the expected, correct application is actually the one running; and second, that it runs correctly, neither tampered with nor interrupted by attackers. Our solution is based on the SCONE framework, and we are working closely with the SCONE team to push forward our SGX solution on top of it. Our solution is also EEA-compatible, leveraging the EEA Trusted Compute specification that was released just a few days ago, which we believe is a milestone for the EEA and Ethereum community in supporting trusted computing.

Okay, so here is the workflow of iExec end-to-end trusted execution with Intel SGX. It principally contains three steps. Step one focuses on encrypting the user's input data: remember what I said about end-to-end protection, so if you want to trigger a decentralized application, the user input has to be strictly encrypted and protected first. In step one, the input data is encrypted on the user's side with user-generated secret keys, and the encrypted input data can then be transferred to the remote file system.
The secret keys are also uploaded to the secret management service, which is itself SGX-based. In step two, the user triggers the off-chain application on a decentralized node, via our iExec marketplace, through a blockchain transaction. As soon as the off-chain application starts running on the decentralized node, an SGX bubble is automatically created to protect it. The application first retrieves the user's encrypted data from the remote file system, then pulls the secret keys from the secret management service over a highly secure channel, the SGX provisioning channel. The keys are then used to decrypt the user's input inside the SGX enclave, and the decrypted data feeds the application's execution. When the application finishes running, the output is also encrypted inside the enclave, and an execution attestation is produced inside the enclave as well. Finally, in step three, the user downloads the application output, which was encrypted inside the enclave, and I want to underline that only the corresponding user is able to download the output, and only that user is able to decrypt it.

I would like to show a demo. This demo is based on the Blender 3D rendering application, and it shows how a user can use our SGX solution to protect their input and output data and get full data protection. First, the user only needs to run a simple command, iexec te init, which initializes the SGX project. Then the user copies their input data into a specific folder, te-inputs, and runs another simple command, the iExec TE encrypt-and-push command. This corresponds to step one of our workflow: it encrypts the user's input data and pushes it to the remote file system. The command returns output containing parameters such as the URL pointing to where the encrypted data is stored on the remote file system, and the user does a few simple configuration steps with that URL, because it feeds into the second step. In the second step, the user triggers the off-chain application on a decentralized node: on our iExec marketplace the user chooses an available SGX worker and triggers the off-chain application via a simple command, iexec order fill. The user can also use the watch option to monitor the running status of the execution in real time, and the download option to download the application output as soon as the run finishes. The status of the execution can also be monitored via our UI, through the iExec Explorer or Grafana. The blockchain transactions take some time, but finally, as soon as the application finishes running, the user only needs to run one more simple command, iexec te decrypt. This decrypts the application output and pushes the decrypted result into a specific folder named te-outputs. Here we can see the output being decrypted and pushed into te-outputs; the user goes into that folder and sees the application's result. And again, only the corresponding user is able to decrypt it.
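Behind step two of the workflow sits the idea that the secret management service releases the user's key only to an enclave that attests to running the expected application. Here is a minimal sketch of that gatekeeping; it is illustrative only, with HMAC standing in for real SGX quote signatures and all names hypothetical, not iExec's actual protocol.

```python
import hashlib
import hmac
import secrets

# Toy stand-in for the hardware attestation key; in real SGX this material is
# fused into the CPU and quotes are verified through an attestation service.
HW_KEY = secrets.token_bytes(32)

def measure(enclave_code: bytes) -> bytes:
    """MRENCLAVE-like measurement: a hash of the enclave's code."""
    return hashlib.sha256(enclave_code).digest()

def make_quote(enclave_code: bytes) -> tuple:
    """The platform 'signs' the measurement of whatever was actually loaded."""
    m = measure(enclave_code)
    return m, hmac.new(HW_KEY, m, hashlib.sha256).digest()

class SecretManagementService:
    """Releases a user key only to an enclave whose attested measurement matches."""
    def __init__(self):
        self.store = {}  # expected measurement -> user secret key

    def register(self, expected_measurement: bytes, user_key: bytes):
        self.store[expected_measurement] = user_key

    def provision(self, measurement: bytes, signature: bytes) -> bytes:
        expected_sig = hmac.new(HW_KEY, measurement, hashlib.sha256).digest()
        if not hmac.compare_digest(expected_sig, signature):
            raise PermissionError("bad quote signature")
        if measurement not in self.store:
            raise PermissionError("unexpected enclave measurement")
        return self.store[measurement]

# The user registers their key under the measurement of the app they expect to run.
good_app = b"blender-renderer-v1"
sms = SecretManagementService()
user_key = secrets.token_bytes(32)
sms.register(measure(good_app), user_key)

# The genuine enclave obtains the key; a tampered application does not.
assert sms.provision(*make_quote(good_app)) == user_key
try:
    sms.provision(*make_quote(b"tampered-app"))
except PermissionError as e:
    print("rejected:", e)
```

The point of the design is that the decryption key never travels to a node in the clear; it is released only over the provisioning channel, and only after the measurement check passes.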
We are running very late on time, so I think we'll hold off on questions; you can catch Lei after the meeting. Thanks, Lei.

We also wanted to make this a little more interactive, so we'll have a short panel. Our hope is that we entertain you, perhaps through our disagreements. I have a set of questions I can ask, but if there are any questions really burning inside you, the audience can ask them as well. I think we should probably start by introducing ourselves. I'm Tom Willis, I'm at Intel; I'm a board member of the EEA and also a director in the Open Source Technology Center. Thomas Bertani, I'm the CEO of Oraclize; we are the most widely used oracle service on Ethereum right now on mainnet, with approximately 1,000 smart contracts using our oracle API, which uses TEEs to secure the authenticity of data. Andreas Freund from ConsenSys; my role there is blockchain Swiss army knife. From Golem; our aim is to create a user-controlled cloud, essentially. Marley Gray, Microsoft. And Sanjay.

Okay, so my first question for the panel: we just released this Trusted Compute specification from the EEA. How does it help, and why is it important? Anybody want to start?

So, one of the reasons the EEA came into being was to create standardized interfaces and interoperability between clients implementing all kinds of scalability solutions and the like. As part of that, we noticed there are different approaches to off-chain compute, based on TEEs but also on zero-knowledge and MPC, that can be very relevant in this space. The idea was to create a standardized interface behind which all these implementations can sit seamlessly, so that a smart contract developer can use a common interface without having to worry about the implementation of the off-chain trusted compute.

If I can add something: I think this effort is really important because right now whoever, like us, is already looking for something like this basically has to design their own solution, which can hardly be compatible with competitors' solutions, of course. Having someone like the Enterprise Ethereum Alliance push for a common standard interface helps everyone understand the needs involved, on neutral ground. And having a standard like this going forward may also help find common ground between enterprise needs and public chain needs, because right now they are very different, but we expect that in the future they may converge. So we are totally supportive of this and looking forward to keep contributing to the next specifications as well.

Okay, next question. We've heard a lot about decentralization here at this conference. Does trusted compute help decentralization, or hurt it, or does it depend on how we implement it?

Maybe I will focus on SGX especially. Unfortunately, even though it is an exciting technology, and it really is, in its current shape it has two centralized creations. One of those is remote attestation, which, let me stress it one more time, requires the centralized, Intel-operated Intel Attestation Service (IAS). Without contacting the IAS for every provisioning of a new enclave, SGX is meaningless; let me stress it, meaningless without the IAS. And the IAS is in full control of Intel: it is a centralized point that nullifies all the decentralization attempts it tries to help. The other centralized point, which we often miss, is Intel's control over the launching of enclaves.
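To make the IAS dependency concrete, here is a toy contrast between service-dependent attestation and the locally verifiable kind the panel keeps coming back to. It is a sketch only: HMAC stands in for the real signature schemes, and the names and flow are illustrative, not Intel's actual protocol.

```python
import hashlib
import hmac
import secrets

# HMAC stands in for asymmetric signatures; in a real design the verifier would
# hold only public material, pinned at install time, not a shared secret.
ROOT_KEY = secrets.token_bytes(32)

def make_quote(measurement: bytes) -> bytes:
    """The platform produces a quote over the enclave's measurement."""
    return hmac.new(ROOT_KEY, measurement, hashlib.sha256).digest()

IAS_ONLINE = True  # availability of the centralized attestation service

def verify_via_ias(measurement: bytes, quote: bytes) -> bool:
    """Centralized model: every verification is a round-trip to the vendor's service."""
    if not IAS_ONLINE:
        raise ConnectionError("attestation service unreachable; cannot attest")
    return hmac.compare_digest(make_quote(measurement), quote)

def verify_locally(measurement: bytes, quote: bytes) -> bool:
    """TPM-style model: anyone holding the pinned root material verifies offline."""
    return hmac.compare_digest(make_quote(measurement), quote)

m = hashlib.sha256(b"my-enclave").digest()
q = make_quote(m)
print(verify_via_ias(m, q))   # works only while the service is up
IAS_ONLINE = False
print(verify_locally(m, q))   # works regardless: no central dependency
```

The two verifiers check the same thing; the difference is purely who must be online and cooperative for attestation to happen at all, which is exactly the centralization complaint.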
Flexible launch control is a good step, but as far as I know no hardware is available yet, there are very few details in the spec, and there are corner cases where it is not clear whether it will actually make practical sense or not. So two things, the centralized IAS and the centralized policing of enclave launch, are very problematic for decentralization with SGX.

You know, it's like SGX, TrustZone, MPC, ZK proofs: decentralization is not going to be achieved by any single one of those technologies; only in combination will we achieve it, and only once we can reliably push trusted compute loads out to every one of you. That's the promise of decentralized trusted fog computing, because in the end, if you look at where things are going, especially around AI, if we don't have trusted compute at the edge, everything is still going to be run by Google, and at that point, who cares? So having the interfaces and the specs is an important step in the right direction, because now everybody can build towards that spec and make their services available. I really encourage anyone out there who wants to start a marketplace for compute services using laptops: go for it, I'll support you. Otherwise, focusing on one technology versus another is a sideshow; it's completely unimportant.

I just want to add one thing. When I started to engage with the Ethereum community, close to a year and a half ago, I quickly figured out the two points of centralization we have, as Joanna was just saying: the IAS, and enclave launch control. What I can share is that we at Intel looked at that, and we have worked out solutions for those things, because keeping Intel in the middle of those kinds of things is generally something we believe we should move away from. So stay tuned: in a very short period of time you will see how Intel is basically getting out of the way, so that you control the enclaves on the hardware, you control the attestation, and you control the launch.

Hello, I'm Przemek Simeon from Santander; we are part of the EEA. John, my colleague, should be on the panel, but he is in London now, so let me just give you the motivation. An enterprise has a huge IT operation off-chain; my bank, for example, has 140 million users. If we want to also have operations on-chain, however small, the critical thing is to bridge them, so anything that helps us bridge on-chain and off-chain operations is a godsend.

At this point, I think the reality nobody has stated is that TEEs will never be decentralized, because decentralization implies a deterministic process, which implies you wouldn't need a TEE in the first place. Using a trusted computing solution, isolating a process and assuming it's safe, is like cheating: we don't have anything better, no mathematical proof or consensus model that is convincing enough, so we use the TEE because, if you assume it works correctly, it will do what we need in a way we consider trustworthy. What I think we should avoid is relying on a single technology, because beyond the two potential points of failure, the trust lines we have with SGX that were just mentioned, there is a third one common to any TEE, any hardware-backed solution: we don't have control of the manufacturing process, and we will never be able to verify in-house that the chip actually matches an open hardware spec, which we typically don't even have. So in any case there is a trust line on the TEE doing what we assume it is doing, and we will never be able to verify that.
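Since a single vendor's attestation is a trust line that can never be fully verified, one standard hedge is to require agreement from several independent attestors before accepting a claim. A toy sketch of that quorum check, with HMAC standing in for real signatures and the vendor names purely illustrative:

```python
import hashlib
import hmac
import secrets

# Hypothetical independent attestors; each holds its own signing key.
attestor_keys = {name: secrets.token_bytes(32)
                 for name in ("intel", "microsoft", "ledger", "google")}

def sign(name: str, claim: bytes) -> bytes:
    """An attestor vouches for a claim by signing it (HMAC as a toy signature)."""
    return hmac.new(attestor_keys[name], claim, hashlib.sha256).digest()

def accept(claim: bytes, signatures: dict, threshold: int = 3) -> bool:
    """Accept a claim only if at least `threshold` independent attestors vouch for it."""
    valid = sum(
        1 for name, sig in signatures.items()
        if name in attestor_keys
        and hmac.compare_digest(sign(name, claim), sig)
    )
    return valid >= threshold

claim = b"enclave X executed program Y on input hash Z"
sigs = {name: sign(name, claim) for name in ("intel", "microsoft", "ledger")}
print(accept(claim, sigs))            # three of four independent attestors agree

sigs["ledger"] = b"\x00" * 32         # one attestation is bogus
print(accept(claim, sigs))            # only two valid signatures remain
```

The trust is not eliminated, it is spread: defeating the check now requires colluding with, or compromising, a quorum of competing vendors rather than one.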
So what we are currently doing to try to overcome that is to say: okay, that's fine, this is impossible to solve completely, so let's use more than one technology. We know we are trusting Intel, but this way we spread out the trust, and as long as we pick attestors that don't have a conflict of interest, because they are competing with each other, for example, the proof you get is stronger. If we get Intel proving something, and then Microsoft proving the same claim, and then Ledger, and then Google, we have four players which are competing and which have a huge reputation at stake, all claiming the same thing. So does the TEE help? Yes, it does, but it's not enough on its own; there is always a trust line anyway, so it's not decentralized.

Okay, great, that was entertaining. So the next question: trusted compute... we all like to make a living, right? Are there any new business opportunities that emerge because trusted compute can now be used with Ethereum? Marley has the answer to that.

Well, I think it will enable the ability to create them. Microsoft will try to build the platform that makes it possible to create the killer application. The thing is, when you start to lay down these layers of infrastructure, as was just mentioned, you need defense in depth: you're going to have layers of this trust, and we're going to have single points of failure in places. The good news is that we're progressing in that direction, but while that's going on, people do need to make money, and you need to be able to build solutions rapidly, get them to market, and do it cost-effectively. As we go around telling customers what we're trying to do with the platform, we're not trying to compete with anyone; we're trying to lay the groundwork, because we think this is super exciting. From an application standpoint, we honestly don't know; we've heard so many ideas, and I can't share some of them, but we have active customers in development building applications that are intended to make lots of money. Not necessarily cryptocurrency, but that will come as well.

If I can add on this: Oraclize has currently served approximately one million paid requests asking for TEE-backed data, so there is an opportunity to make money here, especially because these TEEs, these actual physical devices, are not easy to find at the moment, at least in the cloud, other than Intel SGX. And if we assume we will want to use more than one, some of those are not even designed to be used in the cloud yet. So I think that at least for the next two or three years there is definitely this opportunity, and then other opportunities will arise once this market scales up and we actually see where it is going.

I think there's an interesting conundrum for the trusted compute enablers. If you look at the AI space, the biggest enablers of AI applications nowadays, Hortonworks and Cloudera, are worth what, three billion in market value? The AI application providers are worth something like three trillion. So we're looking at a thousand-x difference. What Marley alluded to is that you need the security down the stack, but the security down the stack isn't going to make you money in the traditional way. So the interesting question is: can we come up with novel, collaborative business models that allow revenue share across players in the stack, from the enabler and infrastructure level all the way up to the application and distribution level?
Then you could create novel solutions that solve currently intractable problems, problems that are high value for the end consumer and that they are willing to pay for, and therefore everybody gets a cut. Rather than having the ones down the stack be a thousand x lower than the ones at the top, you spread the wealth a little and thereby incentivize collaboration, because there's more to go around.

Okay, my next question: what does trusted compute allow Ethereum to do that it can't already do?

If we look at mainnet, at the actual concrete use cases of smart contracts today, beyond the hype, you have a big chunk which is decentralized exchanges, which typically don't need an oracle in most cases, so no external data or TEE-supported data. Then you see gaming and gambling; those typically do need something like that. So I think the biggest use case in production on the public chain today is providing randomness: generating external randomness that miners are not able to collude on. There are others, like price feeds or insurance products, and we have seen a bunch, but they still have limited traction. On private chains there is much more variance; I'm sure other people on the panel can share their experience there. There are so many different use cases, but the reality is that it depends on the context: randomness is something huge on the Ethereum mainnet, on the public chain, while it's probably something really small or negligible on consortium chains.

I have a somewhat related comment, which is about keeping a sober view on how much we can really get from these various TEE technologies versus how much we should be reserved about them. We all talk here about cool use cases and how this solves all the problems, and we treat these SGX enclaves, or other enclaves, as magical black boxes protecting our payloads, but in reality we should always remember that they can only offer limited protection. For example, I quickly looked at the EEA 0.5 spec this morning, and at no point does it explicitly state that we should probably put a time limit on the longevity of the secrets our enclaves process, both because of attacks like Foreshadow and because there are, and there always will be, side channels. That means we cannot be so excited about using enclaves to protect any kind of secret; more likely only very short-lived secrets, which will probably limit the range of usable applications quite significantly.

I don't know, I haven't seen a Faraday cage in a Microsoft data center yet, but have you made a successful side-channel attack in a Starbucks?

No, let me comment on this, because that makes my argument. Saying there are no cages in data centers: those data centers are run by someone, and perhaps you are willing to trust Microsoft slightly more than just a random person running some Ethereum node, a node run by whoever. So we should be very cautious about trusting whatever SGX node, because it is not sitting in a cage. Maybe it can protect our secrets for five minutes, maybe five hours, maybe five weeks; I don't know, and that's exactly why we need some kind of metric, so that applications can be designed with this taken into account.

That's why you need to try to hack it at a Starbucks, to see what it takes, so that you can actually give those metrics; you need to run experiments.

I just want to say one thing: like anything in software, it is secure until it's broken, and at some point in time everything will be broken.
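The point about time-limiting secrets can be made concrete. A minimal sketch, not from any spec, of an application-level rule that assumes side channels will eventually leak enclave state and therefore refuses to use a secret past a hard expiry:

```python
import time

class ShortLivedSecret:
    """Wraps a secret with a hard expiry, after which it can no longer be used.

    The premise, per the panel's discussion: since we cannot know whether an
    enclave protects a secret for five minutes or five weeks, design the
    application so that secrets die young and get re-provisioned.
    """
    def __init__(self, value: bytes, ttl_seconds: float):
        self._value = value
        self._expires_at = time.monotonic() + ttl_seconds

    def use(self) -> bytes:
        if time.monotonic() >= self._expires_at:
            self._value = b""   # best-effort scrub
            raise TimeoutError("secret has expired; re-provision a fresh one")
        return self._value

secret = ShortLivedSecret(b"session-key", ttl_seconds=0.05)
print(secret.use())             # fine while fresh
time.sleep(0.06)
try:
    secret.use()
except TimeoutError as e:
    print("expired:", e)
```

Picking the TTL is exactly the missing metric the panel asks for: it should come from experiments on how long a given TEE realistically resists extraction, not from guesswork.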
Even crypto-economics-based solutions have parameters within which they work and beyond which they start to fail: how much should I have at stake, how much should I get penalized, and so on. The point I want to make is that there is no one solution that is a silver bullet for everything out there. Depending on the use case, it is a mix: you could think of a TEE combined with some kind of crypto-economics. There are these kinds of permutations and combinations we have to think about, and we should always think about defense in depth and not have a single point of failure; that's not a good design in general anyway. TEEs have a role to play, but knowing where to apply them, and when you should have an alternative, is part of the system design.

I'll just echo one last thing: that was a good point on time-limiting the secrets. The thing is, as we start to do these things, no system is riskless; there are risk tolerances, and you live with them, everybody knows how to do this. If you had zero risk tolerance you would never get out of bed; you wouldn't use a toothbrush, especially an electric toothbrush; you wouldn't use an electric razor, or a razor at all, because you'd cut yourself. It's a continuum, and you're pointing out a great attack vector that we probably hadn't thought about: how long should we let these secrets survive? At Microsoft we rotate keys; is anybody else from Microsoft left in the room? You have to rotate your keys all the time, and it is painful. When we have to rotate our keys on our systems, we prepare for two weeks; we have it down almost to a rocket-science process, but we can't just hit a button. When we add TEEs it gets even more complex, especially when we talk about blockchain keys. So we really need, and this is the greatness, the benefit, of open standards: we put some sunlight on these things, and we'll get great feedback from collaborators who have really been there and done that. You might not know of an attack vector for something now, but who knows, there might be a long-lived, patiently planned attack, and if your secret lives too long, it is a ticking time bomb.

Okay, last question: what should we add to the next release of the Trusted Compute spec?

Okay, if the question is what I would like to see next, just to clarify: first, decentralize the remote attestation. Nothing fancy, just do it like we had with the TPM in the old times, where everybody could locally verify the quote from the TPM; that's really easy, you can do that. Second, make clear claims about what tradeoffs flexible launch control will imply: for example, will I still be able to use Intel's quoting enclave while having my own custom launch enclave? Nowhere in the spec is this explicitly written, whether it will work or not, and I would like to know, because the point of a custom launch enclave is that it does not force me into a legal contract with Intel to start my own enclaves; if having a custom launch enclave prevents me from using Intel's quoting enclave, then it is meaningless, unfortunately. Third, increase the memory for enclaves. In Golem, where we want to run lots of large, complex payloads, we can already run Blender in enclaves, for example, and it runs pretty well, but when the scenes get really large, the problem of swapping enclave memory out to DRAM really hits performance. Ideally, the operator of a node could have a slider for how much DRAM should be devoted to the EPC, the enclave page cache. And finally, it would be nice if you also provided some secure paths to devices in general, not just to Intel graphics.
Integrated graphics you already have, I know, as the Ledger presentation told us, although unfortunately I never found a public spec that would allow others to use it. But we would like secure paths to any device, especially to more advanced GPUs that implement CUDA, so we could allow people to rent out GPU power for computations. So, those four things: liberate the IAS, the centralized remote attestation point; specify the conditions for custom launch enclaves; increase enclave memory; and provide secure paths to devices.

I'm slightly confused, was your question about the API spec, or... okay, so it's your wishlist for Intel. I think Sanjay heard you loud and clear, and those are his new KPIs for next year. Action items. Yeah, I'll talk to your boss about that one.

For the spec, as far as the API spec goes, I think we need to add a few more specifics that are directly related to ZK proofs and MPC, the way we currently have them for TEEs, even though that is mostly just for SGX; that's the next iteration we should add. And I think what we need to do in the working group is a best-practices document around how to create and deploy trusted compute. If I want to create a zk-SNARK compute service, what are the steps I need to follow, and what are the best practices? The same if I want to do it for a TEE, or if I want to virtualize it, what do I need to do? That goes beyond the API spec: it's really starting to talk about best practices, how to implement zk-SNARKs effectively, what the best practices are for the types of TEEs to be used, and so on. I think that's the next big workload for us.

I think there are two main pieces I would like to see in the spec. One, as Sanjay already mentioned, is better compatibility and cost efficiency when we apply the API on the public chain, because right now it would be super expensive, which in practice makes it not viable there. So I would like either some variation which works better for the public chain, like a minimal set of features, or a new iteration of the spec which is more cost-efficient there. The second thing is to understand whether the current spec is as SGX-agnostic as it could be. It is already quite agnostic in some parts; it says it could be something other than Intel SGX, you just have to specify it, and so on. However, we have already mentioned the Open Enclave SDK, for example, and Google is working on Asylo; those two projects are both pretty much trying to abstract out the enclave-specific logic, and I would like to understand whether the existing API in this spec would work well with either one, or the other, or both.

Yes, and I think that was one of the purposes of this whole thing: it's at version 0.5 for a reason. We wanted to put a stake in the ground, put something out there so that people can react, provide input, and create something that we as a community feel meets our needs. What I would say is that there are a few of us who have been pushing the spec until now; more of you can join, and we can get all of these perspectives into the spec. That's really the objective: not to be specific to a technology, but to create something the community can use. Okay, thanks very much to the panel, and thanks to the audience.