Thanks for coming here late in the day. We have a packed session, so we'll get started quickly. First, since there's a whole bunch of us who worked on this, let's have a quick intro from the core contributors on the team. So you guys want to start? Marley Gray, Microsoft. Andreas Freund, ConsenSys. Jim Zhang, Kaleido. [Further introductions, partly inaudible, from colleagues at iExec, Intel, and Envision Blockchain.] So as you can see, it's quite a cross-company effort. Today we're going to talk about off-chain trusted compute. This work started almost two years ago, in the context of Enterprise Ethereum, as an option to address scale and privacy. What I will do quickly is give you a little bit of history and then drop into the immediate details. We were going to actually walk you through some hands-on demonstrations, but the internet access here is very interesting, so we're going to just walk you through and give you enough pointers on where you can go and try it out at your convenience. There are going to be two of these walkthroughs that Dean will introduce, and then hopefully we'll have enough time for some case studies, three case studies that people have put together around how they are using trusted compute. Quickly on Enterprise Ethereum: the focus is how to apply Ethereum in the enterprise world, and the main challenges when it comes to enterprise are what we call the three Ps: permissioning, privacy, and performance. Performance and privacy are two of the more important ones, and we looked at trusted compute as an option for solving those kinds of problems. We have made a lot of progress on that: we introduced a 0.5 spec last year at DevCon 4, and now we have a 1.0 spec out there, and a reference implementation is also available for people to play with; we'll walk you through some of that shortly.
Some of the key salient features of this spec are privacy and confidentiality of the data, as required by enterprises and industry verticals, and also support for things like decentralized identity. It's a pretty neat spec. It has gone through a lot of work, and you're more than welcome to take a look at it and see how it fits into your businesses. Right now, when we talk about trusted compute, there's a lot of TEE in the spec, but it is written to support other forms of trusted computation as well: ZK proofs, multi-party computation, and things like that. A lot of that is in there, but a lot more work needs to be done. With that, I'll hand off to Andreas. Hello, everyone. First I would like to say thank you to all the collaborators; it was truly an amazing journey. It is the first time that the EEA has produced shippable code, and it is the largest enterprise collaboration in the blockchain space, period. Five companies have worked together to produce this EEA trusted reward token. The goal of this token is to incentivize participation by EEA member organizations and their employees. Why is that important? Because it is very hard to get people to contribute; it's always the same usual suspects. The goal is to apply some of the learnings from the open ecosystem space around incentive models and see whether they can be leveraged in the enterprise. There is experimentation with different token principles and design templates; we're eating our own dog food here. We want to create synergies with other EEA groups: the testnet group, the Token Taxonomy Initiative that Marley is leading, and the new technical token working group, as well as open Ethereum specs, the ERCs, which Mike will talk about, and then web standards from the W3C. We're actually using the W3C standards for DIDs and verifiable credentials; Mike will also talk about that.
This is really the use case for the Trusted Compute Framework spec and the trusted compute testnet. That's really important, and we're creating a first simple use case that is actually usable, and that is also a first: enterprises can actually start using tokens in their day-to-day operations, and this model is easily transferable into one enterprise, or between multiple enterprises, and into other consortia. With that, I want to hand it over to Marley to talk a little bit more about the details of the token model. Thank you. What we wanted to do was figure out the business process for solving a difficult problem: how do you get consortia comprised of multiple corporations, which might compete with each other, to actually lean in and do the work to contribute to the benefit of all? That is pretty difficult to pull off. We wanted to create an incentive, something I like to call behavioral modification protocols, to incentivize good behavior and punish bad behavior. And because we're trying to scope out work that needs to occur, you need to frame this as the potential to earn these rewards. We'll talk about the tokens, and then we'll talk about how we actually grant them, really quickly. First off, we essentially create a grant: the potential, given to an organization, to receive X number of reward tokens if it achieves the goal laid out in the grant. That's the potential, and the grant can list individual contributors. We want the actual people doing the work to get credit for doing it, so we can list who those people are, and they can get a percentage of the total reward if the goal is achieved. There's also a punishment; the punishment may equal some negative value up to the total reward, and you can set that in the grant as well. And then there's also something called a reputation token, which we'll talk about as well. Those are granted.
Those are one-for-one with the reward token. Essentially, the reward token is a token that organizations can redeem: when the reward grant vests, the organization is awarded those tokens, and it can take those tokens and redeem them for value of some sort. We'll talk about what we're actually providing here in a minute. Go to the next slide. It's fun. You can go out and shop at the EEA's swag store and get some really interesting materials, some t-shirts, dinner with Ron. That doesn't mean that Ron's going to pay for the dinner; Ron will come with you, and the EEA will pay for the dinner. But this was more to show what you could do. One of the things we wanted to put in the grant as well, because these are consortiums made up of organizations, is that businesses like Microsoft might contribute a bounty and say, hey, we're going to put up some services or some Xboxes. So the contributors who meet the goal will actually get something they want at the end of it. Really, we wanted to leave the model open as to how you could reward them. An interesting thing about this is that if you get a reward token, the reputation flows to the individual. The individuals will each get a percentage: say there are 100 tokens and two participants; you can divide up those reputation points, so if it was equal effort by both participants, they'll get 50 reputation points each. And that accumulates overall for the individual. Penalty tokens essentially have negative value. They all have to be redeemed before you can redeem your reward tokens, so they'll decrease your overall balance, dictating whether you have to settle for the autographed sunglasses that I provide, or even go slumming with a t-shirt. So this is how these tokens work and create this business model.
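The grant mechanics described here — reward tokens split among contributors, reputation tokens issued one-for-one, and penalty tokens that must be settled before rewards can be redeemed — can be sketched in a few lines. This is a hypothetical illustration; the class and function names are mine, not the actual EEA contracts:

```python
from dataclasses import dataclass

# Illustrative sketch of the grant/reward/reputation/penalty accounting
# described in the talk. Names and structure are invented for clarity.

@dataclass
class Member:
    reward: int = 0
    reputation: int = 0
    penalty: int = 0

    def redeemable(self) -> int:
        # Penalty tokens must be settled before rewards can be redeemed,
        # so they reduce the effective reward balance.
        return max(self.reward - self.penalty, 0)

def settle_grant(total_reward: int, shares: dict) -> dict:
    """Split a fulfilled grant among contributors by their share (0..1).

    Reputation tokens are issued one-for-one with reward tokens."""
    members = {}
    for name, share in shares.items():
        amount = int(total_reward * share)
        members[name] = Member(reward=amount, reputation=amount)
    return members

# Two contributors with equal effort on a 100-token grant get 50 each.
members = settle_grant(100, {"alice": 0.5, "bob": 0.5})
print(members["alice"].reward)        # 50
members["alice"].penalty = 20
print(members["alice"].redeemable())  # 30
```

The key design point from the talk is visible in `redeemable()`: penalty tokens are not a separate spendable balance but a debt netted against rewards.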
So let's move on. I think now we'll get to the real meat in terms of how this trusted token was implemented and how the trusted compute actually all fits in. Good afternoon, everyone. Since early last year, iExec has been working closely with Intel, ConsenSys, and the other EEA members on the Trusted Compute Framework, moving forward the related specifications and trusted use cases just like the EEA token. First, before talking about technical details, I would like to take two minutes to talk about a fundamental question: why would we offload on-chain execution to an off-chain trusted execution environment? Nowadays there are a lot of on-chain applications running smoothly on blockchain networks, supporting a great number of interesting use cases. But for some of these use cases, we are just starting to realize that there can be limits. For example, what if the execution logic needs to be upgraded frequently? If everything is deployed on-chain, that will be costly, right? And for some other use cases, the execution logic can be very complex and sophisticated, and that is costly too: if it runs only on the Ethereum Virtual Machine, it's probably not so efficient once the application becomes that sophisticated. So we had an idea: why don't we just migrate this execution from on-chain to an off-chain trusted execution environment? If the application runs off-chain, then whenever you want to upgrade it, that represents nearly zero cost, and the off-chain execution can host sophisticated applications with increased efficiency. What's more, in off-chain networks all the execution logic can be coded in almost any of the mainstream languages: Java, Python, JavaScript, Go, even C/C++.
And what's more, execution running in off-chain networks offers better performance, because it's executed just once, based on compiled code, and it can very seamlessly talk to external data sources through the corresponding API calls. One of the great features brought by the TEE is that it offers end-to-end privacy preservation: all the data can be strictly encrypted, and decryption can only happen inside the TEE enclave, so privacy is guaranteed by hardware-level security. Here's a diagram. Originally, we deploy smart contracts and trigger them via the on-chain network based on a private key, which is used to sign the blockchain transaction. Now we can migrate this smart contract execution logic to an off-chain trusted execution environment enclave, and we only need to transfer this private key to the TEE enclave via a secure channel. After applying the execution logic, a blockchain transaction can be sent directly from the TEE enclave. This blockchain transaction is triggered in the name of the owner, because the private key was transferred only to the TEE enclave and no one else is able to inspect or tamper with it. In this way we can offload the execution from on-chain to off-chain networks without sacrificing the user experience, the trust, or the security. As I just mentioned, it's a great idea to migrate execution from on-chain to off-chain if, for example, the execution logic is to be upgraded very frequently, and that is the case for the EEA token execution logic. Here is the architecture diagram of the EEA token, which is composed of four principal parts. There is the UI part, which allows the user to submit the EEA token request. The EEA token request is then captured by the TEE Listener module, and the TEE Listener module then triggers the TEE SGX application.
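The key-provisioning flow just described can be sketched as follows. This is only an illustrative simulation, not the real TCF API: the `Enclave` class stands in for an SGX enclave, and HMAC stands in for real transaction signing with the owner's private key:

```python
import hashlib
import hmac
import json

# Illustrative sketch (not the real TCF API): the owner's signing key is
# provisioned only to the trusted worker; the worker executes the contract
# logic off-chain and signs the resulting transaction in the owner's name.

class Enclave:
    """Stands in for an SGX enclave: the key never leaves this object."""
    def __init__(self):
        self._key = None

    def provision_key(self, key: bytes):
        # In the real system this arrives over an attested secure channel.
        self._key = key

    def execute_and_sign(self, workload, payload: dict) -> dict:
        result = workload(payload)
        body = json.dumps(result, sort_keys=True).encode()
        sig = hmac.new(self._key, body, hashlib.sha256).hexdigest()
        return {"result": result, "signature": sig}

def mint_logic(req):
    # Arbitrary off-chain logic; it could consult external APIs, databases, etc.
    return {"to": req["to"], "minted": req["amount"]}

owner_key = b"owner-signing-key"
enclave = Enclave()
enclave.provision_key(owner_key)
tx = enclave.execute_and_sign(mint_logic, {"to": "org1", "amount": 100})

# The transaction verifies under the owner's key, so it is accepted
# as if the owner had signed it directly.
expected = hmac.new(owner_key,
                    json.dumps(tx["result"], sort_keys=True).encode(),
                    hashlib.sha256).hexdigest()
print(tx["signature"] == expected)  # True
```

The point the speaker makes is captured by the `_key` attribute never being readable from outside: only the enclave can produce a valid signature, so upgrading `mint_logic` costs nothing on-chain while the owner's authority is preserved.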
The EEA token execution logic runs inside an SGX enclave, which is deployed on a Microsoft Azure SGX virtual machine. After applying the execution logic, the blockchain transaction can be triggered from the TEE enclave directly, in the name of the EEA administrator. As Andreas mentioned, this is a great cooperative work between the different EEA members. Envision Blockchain takes charge of the implementation of the UI part; iExec takes charge of the implementation of the TEE Listener as well as the TEE SGX application, and of deploying the Trusted Compute Framework to the Microsoft Azure SGX virtual machine; iExec also takes charge of the implementation of the TCF TEE contract; ConsenSys takes charge of the implementation of the EEA trusted token smart contracts; and Kaleido takes charge of deploying the blockchain networks. The blockchain network is based on a Kaleido network running Hyperledger Besu in a production fashion. I'll leave Mike to talk about the smart contracts implemented by ConsenSys. Thank you. In the smart contract layer we have three components: one, the tokens; second, decentralized identifiers; and third, verifiable credentials. The way we make this ecosystem work is that we deployed these three contracts with an operator contract on top, managing the contracts' admin functions such as burn, mint, transfer, et cetera. To be able to do that, we need to make sure that the participants receiving the tokens are, one, employees of an organization, and two, that the organization is a member of the EEA. That is where ERC-780 and ERC-1056 come into play. ERC-780 is a claims registry contract originally developed by uPort, and we've adapted it for this use case: essentially it's a contract where we allow the EEA admin to specify which organizations are members of the EEA.
The second contract is the Ethereum DID registry, ERC-1056. That's also a contract standard that was developed by uPort, and it allows an organization to specify who is an employee of the organization. This is not an EEA admin function, because we don't want that; we want the organization to self-specify who's working on their behalf, their delegates. The function we're actually really interested in is the setDelegate function in ERC-1056. Organizations can set their employees as delegates working on their behalf, so that when the EEA admin mints tokens, it's going to check, one, whether the organization is an EEA member through ERC-780, and two, whether the employee is registered as an employee of the organization under ERC-1056. And from there we mint the tokens with the appropriate amount. This is going to get increasingly complex, so bear with us. Before we step through this chart, let's use an example of why we are bothering with all this stuff; that's the heart of trusted compute. Who knows about ERC-20? Right, so we're talking about tokens. Non-fungible tokens? ERC-721, okay. So, fungible tokens and non-fungible tokens. I run a minting machine, everybody knows who the tokens are going to, and there's an issuer that holds the secret key, signs the minting requests, and you get the tokens. What's the problem? Why do we need all this in the middle? Let's use this example. Say Ron is the holder of the secret key that mints the tokens, and he mints tokens to organization one so they get some reputation tokens, right? Well, actually, Ron looks incorruptible, so that's not going to work. Let's say Charles. Charles is the holder of the private keys, and I've known Charles a long time, so I know he loves exotic whiskey, right? So we're going to make a deal.
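A minimal sketch of the two eligibility checks Mike describes, loosely modeled on ERC-780 (claims registry) and ERC-1056 (DID/delegate registry). The class names and storage here are simplified Python stand-ins, not the deployed Solidity:

```python
# Hedged sketch of the two pre-mint checks: membership (ERC-780 style)
# and delegation (ERC-1056 style). All names are illustrative.

class ClaimsRegistry:            # ~ ERC-780: the EEA admin asserts membership
    def __init__(self, admin):
        self.admin = admin
        self.members = set()

    def set_member(self, caller, org):
        if caller != self.admin:
            raise PermissionError("only the EEA admin can assert membership")
        self.members.add(org)

class DIDRegistry:               # ~ ERC-1056: orgs self-specify delegates
    def __init__(self):
        self.delegates = {}

    def set_delegate(self, org, employee):
        self.delegates.setdefault(org, set()).add(employee)

def mint(claims, dids, org, employee, amount, balances):
    # Check 1: the org is an EEA member (ERC-780 style claim).
    if org not in claims.members:
        raise ValueError("org is not an EEA member")
    # Check 2: the employee is a delegate of the org (ERC-1056 style).
    if employee not in dids.delegates.get(org, set()):
        raise ValueError("employee is not a delegate of the org")
    balances[(org, employee)] = balances.get((org, employee), 0) + amount

claims, dids, balances = ClaimsRegistry("eea_admin"), DIDRegistry(), {}
claims.set_member("eea_admin", "org1")
dids.set_delegate("org1", "employee1")
mint(claims, dids, "org1", "employee1", 100, balances)
print(balances[("org1", "employee1")])  # 100
```

Note the asymmetry the talk emphasizes: only the admin can write membership claims, while each organization manages its own delegates.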
Charles, for every token you issue to Kaleido, I'm going to give you a case of whiskey. What this means is that as long as I can corrupt the secret key holder, I can get all the tokens I want, right? There's no safeguard to make sure the transaction is legit. You can only trust the system as much as you trust the person who holds the private key. So what we want is some safeguard ensuring that when a minting request is issued, it is issued by meeting a bunch of criteria that are set in place. Okay, now you think about this: we need criteria, so why don't we just put them in a smart contract that extends the ERC-20, does the validation, and then calls the ERC-20, right? That's easy to solve. It can be solved this way if all your checking can be done on-chain: if all the data is available on the chain, you can do this very easily. But there are a lot of situations where the checking needs to consult external databases, look at logs, look at meeting minutes from administrators, and do the validation from outside resources that are not reachable from smart contracts. That's where this kind of solution comes in very handy. So let's step through this and see how it works in the program. Before this can flow end to end, we need to do two things. We need to make sure the secret keys are in place. First of all, you have the front end and you have the blockchain, and then somebody needs to hold the signing key that has the special power of minting tokens. That's going to live in this layer we call the trusted compute listener. This is basically an API service that makes the system work together with the front end. And then this is going to call some trusted system, where when that system is executing some logic, I can be somehow guaranteed and convinced that it's running the right logic. The magic is in the Intel technology called SGX: Software Guard Extensions, thank you.
So you can be convinced that when it executes some logic, you know exactly what logic was executed, in terms of getting a proof, an attestation, from that component. Because of that, we know that to mint a token we have to have participation records: say, the organization participating in calls, or GitHub commits that contributed to the publication of the next spec. That sort of thing can be checked here, and all the logic can run and call any APIs it wants, whether on-chain or off-chain, because this is just a regular component that does compute. That's the beauty of this solution. Once we've got the keys in the right place, the admin, Charles in this case, logs into the UI and says, okay, Kaleido did some good work, let's give them some tokens. That sends a request that is accepted by the trusted compute listener, and the listener now needs to talk in secure mode, so this request going to the trusted compute component is secured by public key encryption; only that component can decrypt it. Now the workload goes to trusted compute, it does the logic and checking, and once it passes the criteria, the token issuance request is finally made, tokens are issued, and everybody's happy. That's the flow. This is only one pattern of what you can do with trusted compute. There are others, like oracle patterns, where you can use it to get external data and be assured that when it grabs data from external resources, nobody is manipulating it. There is very sensitive data in the world that can mean millions and millions of dollars if you tweak it even a tiny bit; when you get that data through this kind of component, you know it hasn't been manipulated. So, what we have done with the EEA token project: if you go to this link, you will see all the components involved in making this work. Mike already took you through the smart contracts.
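The criteria-checking step inside the trusted worker might look like the following sketch. The participation records and the policy thresholds are fabricated examples; the real project defines its own criteria and record sources:

```python
# Illustrative criteria check run inside the trusted worker: before
# minting, consult off-chain records that a smart contract could not
# reach. The record source and the policy below are invented examples.

PARTICIPATION_LOG = {"org1": {"calls_attended": 12, "spec_commits": 3}}

def meets_criteria(org: str) -> bool:
    rec = PARTICIPATION_LOG.get(org)
    # Example policy: attended at least 10 calls and made >= 1 contribution.
    return bool(rec) and rec["calls_attended"] >= 10 and rec["spec_commits"] >= 1

def handle_mint_request(org, amount, ledger):
    if not meets_criteria(org):
        raise ValueError("criteria not met; refusing to sign mint request")
    ledger[org] = ledger.get(org, 0) + amount

ledger = {}
handle_mint_request("org1", 100, ledger)
print(ledger["org1"])  # 100
try:
    handle_mint_request("org2", 100, ledger)  # no participation record
except ValueError:
    print("rejected")
```

Because this check runs inside an attested component rather than on Charles's laptop, bribing the key holder no longer helps: the key only ever signs requests that pass the policy.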
It took about 10 different smart contracts working together, so take your time to understand what each of them does. The TEE layer that the iExec folks did has two components: one is the API layer that takes the request from the UI, and that launches the worker node, which actually does the processing, through a secure channel. The Envision Blockchain folks did the UI work to make it integrate into the EEA member website. And finally, Marley and his team did the modeling to make sure that what we do is compliant with the TTF spec. You can check out all of this from the repository, and if you have any questions, feel free to open an issue and we can engage. To help folks understand that this is real and it works, we did a live deployment running in Azure and Kaleido: all the components outside of the blockchain are deployed to Azure, including the SGX cluster, and the blockchain itself is managed by Kaleido, which is also on Azure. And if you want to try for yourself how these different components work together under the covers, we also created a local environment based on Docker and Docker Compose. Just follow the README and you can try this yourself and run everything end to end without relying on a cloud component. That's what I'm going to take you through in the next, I don't know, hopefully 15 to 20 minutes: stepping through all the different components so you can see this thing running at the end. Let's get to that now. Any questions so far about any of this? There's one at the back. Okay, yes. [Audience: This is a nice little project. Why not just use Gitcoin?] Gitcoin? Why not just use Gitcoin? Oh yeah, anybody is welcome to create a bounty project and see how people can improve on top of this, right? What we've done is one pattern; again, this is just one pattern you can do with TCF, and there are other patterns you can use for that. That's a great idea.
All right, so before I go to the code: this is the simplified version of the local environment, compared to the real version that's deployed to the cloud. We have these components I already talked to you about. At the heart of the secure compute piece is a secure channel between the component owned by the EEA admin and, at some point, a stateless worker node that can serve anybody. How that communication happens securely is through a secure channel; if you want the details, ask these guys. But obviously we don't need that for a local environment, so what we do instead is use a shared file system: something generated by the TEE Listener is committed to a file system, and that file system is shared with the worker node. That's how secure provisioning happens between the two in local mode. And the blockchain itself is Hyperledger Besu running locally through Docker Compose. Right, let's take a look. Feel free to stop me anytime if you see something you'd like to understand more about. [Adjusting the font size.] Is that good enough for the folks in the back row? Is that good enough? Tell us what it says. That's a huge font. All right, let's make it huge. Bold. There we go. Bigger, bigger, bigger. Okay. So let's do some cleanup first. I mentioned the GitHub repo: if you go to GitHub at the link, switch to the devcon branch, and then you can see... oh, pardon me. All right, never mind, we don't need that for the demo. Go to the devcon branch and then go to the local folder, and there's a README with all the steps you need. So if you want to try this at home, you can do that; all you need is Docker. Okay, so what I've got is three different Docker Compose files that put the system together. The first one is the backend, which is the blockchain itself. It runs the latest version, I think, which is 1.2.4.
For every Besu node, we need to run a companion component, which is essentially the wallet: because we're lazy and don't want to do external signing, we just ask the node to do the signing for us, and this gives the Besu node a wallet. That's what EthSigner does here. Then we have another Besu node and another wallet that goes with it. So that's pretty simple as far as that goes: it creates the two nodes and two wallets, right? It's all working fine so far, and you can connect to it and see that it's running for real. There you have it: we're in the console, and we can see it's already at block 14,000-something, because I've been running this environment for a while; that's why it's got a bunch of blocks already. So now we have the backend. Let's look at the contracts. I've already deployed the contracts, so I'm not going to deploy them again. There are these 10 different smart contracts. They give you a registry of the orgs: when a request goes to the smart contracts to issue a token to an org, the registry is used to verify that this org has been registered. And when you want to issue to an employee, the claims registry is used to verify that there's a claim about the association of this employee with the org. That's how these things work. The main logic for processing the issuing and burning of the tokens is the operator contract; that's where the main logic lives. And then you have the different token contracts. Under the covers they are all based on ERC-777 instead of ERC-20; it's an improved version of ERC-20 for fungible tokens. So that's been deployed. Let's start the middle tier, which is the TEE stuff. We have another Docker Compose here. What this does is provide the APIs for the UI to call and then process requests. This only needs one component, one Docker instance, because the actual worker nodes are spun up by it on demand.
When there's a work request, it spins up a new instance of the worker node, has it process the request, and then cleans it up afterwards. Got that. Finally, let's bring up the UI. The UI has four different components: the HTML layer, the backend layer in Node.js, an Nginx for load balancing, and a database to save login information. So we've got everything running now; let's look at the UI. You need to log in. The default is admin/Pwd: when you try this yourself, "Pwd" with the capitalized P is the password. Very secure. For the EEA instance? Yes. Or you have to ask Charles. Yeah, or offer whiskey. The first thing you do as the EEA admin is register a member organization, right? So let's register an organization; it takes an email. This is going to call the DID registry to register this new member organization. Save user. It's supposed to call... oh, I know what it is: I'm using the wrong branch; I need to switch to the devcon branch. Remember to use the devcon branch when you work with this; you need to rebuild the middle tier and the UI tier. Bear with me for a second. All right, that's rebuilt; restart the UI as well. I needed to do this because earlier I was trying to connect the system to the cloud deployment, and that's obviously not the right configuration. Let me register Devcon5 again. Okay, looks better. So now we've got a member that's part of this consortium; let's say the EEA itself is the consortium, and we just added Devcon5 as a brand new member. Now we're going to ask the admin of Devcon5 to add someone, so we log in as the Devcon5 admin. As of now I have zero tokens, so I need to register some employees who are going to be assigned to work with the EEA on great stuff, right? So, Employee1. As you can see, we don't really use the email to verify identities; you can give any email address you want. All right, so now we have an employee
who is part of Devcon5. Let's go forward and say that in half a year, Devcon5's Employee1 has been participating in all the calls, even chaired some calls, and even contributed to the next version of the spec, right? So it's time to give them some tokens, so they can have the dinner they've been waiting for with Ron. Okay, so it's going to be for Devcon5's Employee1, because of their great contributions. What's happening under the covers is this: the TEE Listener, which is the API server, just saw that request from the UI, and it understands that this is a request to issue some tokens. So it generated a command and asked the worker node to process it. The processing of the command is done in trusted mode, right, by the trusted component, and as a result some reward tokens were minted. Let's see if Devcon5 actually got them. I think it's Gmail; did I use Gmail? Okay, right: you got 100 tokens because you did that work. If you don't know what's happening under the covers, this may be underwhelming. But if you know that in this system there's no way anybody can bribe Charles to get free tokens, then that's a really good thing. So thank you. Thanks, Jim, for the demo. So that ran in simulation mode, which runs every component on the same local machine, right? Now here is something running in hardware mode, which is for a real use case. Basically, the TEE components are deployed on a Microsoft Azure SGX virtual machine, and the blockchain is based on the Kaleido networks. So, hardware mode: here's the architecture for the EEA token in hardware mode. It's somewhat more complicated; the principal difference compared to the simulation mode is these security enhancements. The UI part is the same.
The users can submit the EEA token request, and the EEA token request is captured by the TEE Listener via a secure channel. The TEE Listener is in the scope of the EEA administrator, so the EEA administrator can inject the signing key and encryption key into the TEE Listener, and the TEE Listener will transfer these keys to a secret management service, which is also running inside an Intel SGX enclave. Meanwhile, the TEE Listener will also encrypt the token request and push the encrypted token request to a data storage server. All these components are staged within the EEA administrator's scope and do not yet go outside it. The next step is to trigger the off-chain trusted execution. There are two ways of triggering the TEE applications: the direct model and the proxy model. In the direct model, the client queries the blockchain: "I want to trigger registered worker one; what are its IP address and configuration information?" With this information retrieved from the blockchain, it can then invoke the worker and trigger the off-chain trusted computing directly. That's the direct model; it's quite simple. In the proxy model, the client instead submits a new blockchain transaction saying, "run this execution on registered worker A." Registered worker A is always listening to blockchain events, and as soon as it captures this event, it triggers the TEE application. As soon as the TEE application is triggered, a secure channel is established between the secret management service and the remote TEE enclave. This remote secure channel is also called the SGX secret provisioning channel. The signing key, the private key used by the EEA administrator to sign blockchain transactions, can then be transferred to the TEE enclave. I would like to emphasize once again that the key in the enclave is guarded by the SGX hardware-level security.
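The direct and proxy invocation models can be sketched like this. The names are illustrative, not the actual TCF/Avalon interfaces:

```python
# Hedged sketch of the two worker-invocation models described above.
# Class and function names are invented; the real interfaces are
# defined by the TCF/Avalon specification.

class WorkerRegistry:
    """Stands in for the on-chain worker registry and work-order events."""
    def __init__(self):
        self.workers, self.events = {}, []

    def register(self, worker_id, url):
        self.workers[worker_id] = url

    def submit_work_order(self, worker_id, payload):
        # Proxy model: the request is recorded on-chain as an event.
        self.events.append((worker_id, payload))

def direct_invoke(registry, worker_id, payload):
    # Direct model: query the chain for the worker's endpoint, then call
    # it off-chain directly (the call itself is just simulated here).
    url = registry.workers[worker_id]
    return f"called {url} with {payload}"

def proxy_poll(registry, worker_id):
    # Proxy model: the worker watches chain events addressed to it.
    return [p for (w, p) in registry.events if w == worker_id]

reg = WorkerRegistry()
reg.register("worker-a", "https://worker-a.example")
print(direct_invoke(reg, "worker-a", "req1"))
reg.submit_work_order("worker-a", "req2")
print(proxy_poll(reg, "worker-a"))  # ['req2']
```

The trade-off is the one the speaker implies: the direct model needs only a registry lookup, while the proxy model routes every work order through the chain, so the worker never has to be reachable by the client.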
Even the owner of the TEE machine is not able to penetrate this enclave and steal or tamper with this private key. The key is also provisioned to the TEE application, which is now able to apply the execution logic and finally sign a blockchain transaction directly to the blockchain networks. There's a question. [Audience: In this Intel trusted environment, doesn't the machine hosting the secret enclave have access to the key?] The secret management service also runs inside an Intel SGX enclave, so even if you don't trust the machine, all the keys live only inside the enclave. And when this enclave starts running, it will also do remote attestation. [Audience: So essentially you have an admin cluster that provisions this secret enclave, but that is also in the admin's scope; essentially you have to inherently trust the admin. So I think that with a consortium, each member can...] Yeah, good question. For the token issuance, all token issuance requests are signed only by the administrator. But for a different use case, if an organization wants to maintain reputation scores for different employees, that could be in the organization's scope. For this token issuance, it's only in the administrator's scope. So the transaction can be sent to the blockchain directly based on this signing key, in the name of the EEA administrator. In this way, we can offload all the execution from on-chain to off-chain without compromising security, trust, or the user experience. So this is a real-world application that we have deployed; it's an EEA spec-compliant implementation, the first of its kind, and hopefully we'll use it within the EEA in a real way. Now, the other part: we talked about this application.
The second part: Eugene will walk us through what it takes to write a workload. We said you can implement your logic in the TEE, so how hard or how easy is it? He's going to walk through the framework that we have put together, which will be open source, to show how you can write a working example. Can you hear me without the mic? No? A question first: I get this when there's a single governor, but how would this model work if you had to share the trusted logic, to ensure that multiple parties are running the same exact trusted logic and agreeing on it? If that is possible, how would you distribute that trusted logic? Can you repeat the question? It seems like this model works if you have a single admin of the system and trusted logic that's deployed, but I'm curious: when you have multiple parties that have to run the same exact trusted logic, how do you guarantee distribution? You may get a more specific answer from the others, but my view is that the point of using this centralized component is that you can trust it. Typically you don't trust a centralized component because there's an operator, there's some code you don't know hidden away on some machine; you can't trust a centralized component, right? But this is a special kind of centralized component that you can trust, because you can ask Intel to give you an attestation that's signed by Intel. If you found it was wrong, you could sue Intel, and that's a pretty good remedy, right? Which means you can trust this without having to run a decentralized system. So you wouldn't use this model if you had calculations that you wanted multiple parties to run? Maybe we'll have an opportunity to present our use case; it's a very good question, but SGX allows you to attest the code that runs in the enclave.
So you publish it, and every participant can verify that what runs in the enclave is what was agreed upon. Yeah. Can I answer that question? Sure. See, this is why we have all these people. We built this TCF implementation as a blockchain, so we can have multiple nodes running as SGX instances. We have a smart contract language that complements these nodes, and then you can actually form a blockchain, build a blockchain kind of concept out of this. I think that is the answer; the part about attestation was already covered. Next question, along the same line: if everybody is worried about that single point of control, could it be replaced with some sort of smart contract that acts as an oracle for the events and checks the conditions? It feels like that should be possible. On which side would that smart contract go, on-chain or off-chain? It would be a combination. You would execute part of it on-chain and part off-chain, but at the same time you would have on-chain information that commits to the cryptographic proof of the execution result. That is the typical pattern that we're going to use in Hyperledger Avalon. So, we heard about the EEA's exciting use case; now I have the privilege to talk about the most infamous and most boring use case, Hello World. Without it, though, we cannot move forward. What actually brings together all the use cases you're going to hear about today is that underneath, they all build on the Hyperledger Avalon implementation of the Trusted Compute specification. That is how Hyperledger Avalon works. Before we go into it:
initially this was intended as a tutorial, but we don't have the ability to do it hands-on, so I'm going to walk through the steps it takes to build a Hello World application. First we need a little background on the architecture, just enough to understand what is absolutely needed to build an application. Architecturally, Hyperledger Avalon includes three parts: a trusted compute service that runs the workloads and manages them; smart contracts that run on Ethereum and allow you to invoke these workloads in the trusted compute service; and the requesters who want to make these executions. The colors on the slide matter, so a little bit about the colors: the dark blue is the application code, and everything else is code that is implemented in common for all applications. To build an application you need to build three parts: the front-end UI; the smart contracts, where you may use the contracts provided by Hyperledger Avalon, or modify them and build additional ones; and the workloads. In reality the first two parts are well understood, but the workloads are genuinely new, which is why today we're going to focus only on how to build these workloads. We want to build the Hello World application and learn how to build your own code. Our Hello World application is going to be pretty simple: it takes a list of names as input, prepares a "Hello" for each name, and returns that as the result. We encourage building any application, and any workload, in two stages: first get it plugged into the framework, then add your business logic. We're going to follow the same pattern today. So here are the dependencies we should be aware of.
Each workload has to inherit from the WorkloadProcessor class, implement one virtual function, and include a couple of markers; we will talk about the details of that. We provide a number of templates that include pretty much all the boilerplate code, so you don't really need to write that yourself. And to start your development you don't need to develop your own client immediately: you can use the generic client provided with the framework to test your application. At the end I'm going to distribute USB sticks; as I said, we had planned to do this hands-on, so they include a "cheating" version with all components pre-installed and stage one pre-built, so you can sample it without going through the full installation and see how it works. So let's actually start from the end: if we had stage one done, what would the result be? First we would start the listener, that's the lower part of the screen, and we would start the client application. In both cases, when you try this, don't forget to source the environment files in your virtual environment, otherwise nothing is going to work. When we submit a command, the important parts are that we submit "hello-world", which is the ID of the workload that we built, and the input data, Alice and Bob. We get the result "error: under construction", and at the end of stage one that is the expected output, because at this point all we care about is making sure the hello-world messages reach the workload we just integrated into the framework; we don't have our own business logic yet. Excellent. So what does it actually take to get to stage one?
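A submission like the one above ("hello-world" with inputs Alice and Bob) boils down to the generic client building a work-order request. The Python sketch below is a hedged approximation of that shape; the field names loosely follow the EEA Trusted Compute spec style (hex workload ID, base64 input data) but are illustrative here, not authoritative.

```python
import base64

def build_work_order_request(workload_id, in_data):
    # Hypothetical helper: builds the rough shape of a work-order
    # submission as the generic client might. Field names and encodings
    # are illustrative, not taken verbatim from the spec.
    return {
        "jsonrpc": "2.0",
        "method": "WorkOrderSubmit",
        "params": {
            "workloadId": workload_id.encode().hex(),
            "inData": [
                {"index": i, "data": base64.b64encode(d.encode()).decode()}
                for i, d in enumerate(in_data)
            ],
        },
    }

# Submitting "hello-world" with inputs Alice and Bob, as in the demo.
req = build_work_order_request("hello-world", ["Alice", "Bob"])
```

At stage one the service routes this request to the freshly plugged-in workload, which still answers "error: under construction".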
First of all we need to create a folder, as in any application, and we have to keep the hello-world application name, because we will need to use this name a couple more times in updates to other files. We need to copy the template files from the folder where we keep them, and then update a couple of files among those templates and a couple more build files in the framework itself. Okay. There are three files that control how the hello-world workload is built. The first, one of the template files, controls how to build the static library. The second includes the building of this particular static library in the overall build process. The third links it into the overall set of workloads to be loaded. The highlighted items are what actually has to be changed, because everything else is provided by the templates. Next one. Then at this stage we need to update one source file. Again, we have to make three changes. Two of them, the first and the last, are really the same: we need to change the namespace, because if we use the same templates for multiple workloads they should not collide, and we have to keep the compiler happy. Finally we need to set the workload ID, and by the way, I made a mistake earlier: it is hello-world with a dash, not hello_world with an underscore. Next slide. And the same slide. Next slide. Oops. Okay. This slide just had different colors initially. What it shows are not the changes, but the elements that are important to understand. We have to inherit from the WorkloadProcessor class. We have to have this macro that allows the framework to clone the class for every execution. And we need to implement this virtual function; we're going to look at its implementation a little later.
And finally there is a helper function that just makes adding output elements easier. Next slide. Now, if we were done with stage one, what happens at stage two? We modify two or three more files. The logic file is where the business logic lives, and the plugin file is the file that links your logic to the framework. We rebuild, and when we run the same command we get the proper "Hello" response. Next slide. The logic is where developers will spend probably 99% of their time, and in our case it's very primitive: it's just a function that takes an input string as a parameter, prepends "Hello", and returns the result. Before we look at what we have to change in the file that links our business logic to the framework, we need to look at the previous implementation. This is the stub implementation. It has the register-workload-processor macro, which simply has to be present. There is the function ProcessWorkOrder, which has a number of parameters that describe the work order submitted for execution; the most important are the work order input data and output data. The stub hard-codes the string "error: under construction", converts it to a byte array, and returns it as the result, and as you can see it completely ignores the input. That is the function we are going to implement in order to link the hello-world logic with the framework. In the real implementation, the for loop iterates over each input data item: on the first iteration it gets Alice, on the second Bob. The data comes in as a byte array, so you have to convert it to a string, call the processing function on it, and once it's processed the result comes back as a string that again has to be converted to a byte array and added to the work order result.
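In Avalon the workload itself is C++ (the WorkloadProcessor class and ProcessWorkOrder function just described). As a language-neutral illustration, here is the same two-stage pattern sketched in Python; all names are hypothetical analogues, not the real API.

```python
class WorkloadProcessor:
    """Toy Python stand-in for Avalon's C++ WorkloadProcessor base class."""
    def process_work_order(self, in_data, out_data):
        raise NotImplementedError

def process_hello(name):
    # Stage two, the business logic: one input string in, greeting out.
    return "Hello " + name

class HelloWorld(WorkloadProcessor):
    def process_work_order(self, in_data, out_data):
        # Inputs arrive as byte arrays: decode each one, run the logic,
        # then encode the result back to bytes and add it to the output.
        for item in in_data:
            out_data.append(process_hello(item.decode()).encode())

out = []
HelloWorld().process_work_order([b"Alice", b"Bob"], out)
# out == [b"Hello Alice", b"Hello Bob"]
```

The stage-one stub would be the same class with the loop replaced by a hard-coded "error: under construction" result.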
Those two strings, "Hello Alice" and "Hello Bob", are eventually returned in the output. Along the way the data goes through a lot of different processing: it gets encrypted, it gets signed, but all of that is transparent to the developer of the workload. As long as you provide this function and the workload ID, the framework will find your workload and send the data to it for execution. That's it. So this shows how to link into the Avalon plumbing using the spec-compliant framework. Now we have three case studies from three companies that are building on top of this framework. The first one is from Santander, and Smuck will walk us through it. How much time? Five minutes? Ten minutes. I'll just take one. I'm from Santander, and we are kind of the special child of the EEA, because we are not a software vendor, we are a bank. So we are not the smartest ones, but we are the ones with interesting use cases. We actually have a lot of exciting use cases that we can't talk about, but there are some that we can share. SGX as a technology has been on our radar; we thought it could solve some problems for us, but it was difficult. Now comes Avalon, now comes the SDK, and, you know, we are lazy, we are always looking at how to build on smart people's work. Let me walk you through the problem. On the one hand it's very boring, because there is not much money involved; on the other hand it's interesting, because it has many aspects that we want to cover and that are relevant in exciting use cases.
We have three friends, Alice, Bob and Eve, and they are working for competing banks. Next slide, please. Each of them has a portfolio of VIP customers that he or she takes care of, and of course, we are friends, but stealing a customer is a problematic thing. Next slide. With each customer there is customer data. This customer data is precious; it is something to be protected, something we must not share. However, there are use cases, situations, regulatory requirements when we actually need to process it jointly. One very simple, quasi-AML case is when we want to look for people sending money in a circle. It's not true AML; it's more like making an impression of having money while you don't really have it. So this is the very simple logic we built, using the templates provided by Sanjay and Eugene, and we are contributing it as an example to the Hyperledger Avalon project. Not only have we been able to do it; less experienced developers should be able to do it too. So the question is how to process and consolidate the data without disclosing it. We could have a gentlemen's agreement, and maybe it would work with Alice and Bob, but we all know something about Eve, so we are not interested in trusting Eve. But now comes the solution, and the solution is: we do everything in a vault, in a special bunker. It's a software bunker, a bunker made of the dreams of Sanjay and Eugene and other people in India and other people from the Avalon project. We have this cryptographically secured enclave and we seed it with code that everybody has seen: Alice, Bob, Eve and whoever else. I don't know why this image is missing; funny things happen as you move from Ubuntu to Mac to Windows, there should be a person here. There was this question earlier about how to trust: if you seed the enclave with your code, you can verify that this is the code, provided that you trust Sanjay. But this is a minor point;
we are a bank, and people trust us with way more complex things and with way less rationality. So: agreed-upon algorithm, enclave. Everybody takes an envelope from the vault, puts their data into the envelope, closes it, seals it, and puts it into the vault. Now the algorithm kicks in; the data is processed, but nobody sees the assembled data, because the assembled data exists only in protected memory. Out comes only the list of suspects. So we have solved our problem. A little vocabulary here. The bunker is the SGX enclave wrapped in the Avalon framework. For our purposes, the envelope is an RSA public key: as we start the enclave, a key pair is generated but only the public key is revealed, so everybody in the world can encrypt, but what is encrypted can only be read inside the enclave, or maybe by Sanjay. The seal: of course we want commitment, we want people to certify that this is the data they commit to, so there is an elliptic-curve signature on it. And now a couple of questions. One was already asked: how can you be sure there is no trapdoor, no malicious code inside the enclave? This you can do by publishing the code, and by publishing it on chain, and this is actually where blockchain comes in handy. Because so far you could ask: okay, good story, but why blockchain, what for? One thing is that when I advertise my enclave to other banks, to other participants, I should have certificates to guarantee that the code there is actually this; and then if they use the attestation service, if they have those cryptographic primitives, they can verify for themselves that this is actually the code, that there is no trapdoor, no funny business. The other question you can ask is: how can we guarantee that people don't cheat by submitting false data? If Charles is my customer and I want to protect him from AML algorithms, I may be submitting tampered data. But this is again a moment when blockchain
comes in handy, because I can gather commitments, essentially a hash of the submitted data, and then at any time in the future an auditor can come to the bank and say: okay, present the data, and compare it against the hash. This is not perfect, but it is as good as it can be: if we protect the data, it's protected, so there is no verification of data quality or truthfulness, but what we can do is gather and store the commitments for future verification. So we have done this, and we are contributing it as a use case, and we have done it in a way that is properly paranoid about the cryptography. There are two components. One is the application, which we were ready to present live, but time constraints and the network prevent it. One component is the client: you talk to the enclave so that you discover the worker and get all the cryptographic primitives you need, the keys and the addresses. The other component is a standalone encryption application. The idea is: if you are really paranoid about your data, you don't want to encrypt in an app that is online, connected to something. You just take the public key, transfer it somehow, hopefully not on a Russian USB stick, to an air-gapped computer in the basement; there you do your encryption, you build your JSON message, you bring it back to the application, and you send it. We stretched the framework a little bit, because you may recognize that the worker becomes stateful: it waits for the data sets. But this is anyhow on the roadmap of the Avalon project. There were a couple of other tweaks we needed to make, but we managed to pull it off, so thank you very much, it's really exciting. These are screenshots, so you know that it can connect. At the moment Avalon uses these pretty names to identify workers; normally you would go to the blockchain and get a list of available workers. These are the names, something to work upon. You press Details and you get the stuff that you actually need:
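The commit-then-audit step described above can be sketched with nothing but a hash function. This is a simplified Python sketch, not the project's actual code; note that a real scheme would add a salt or nonce so low-entropy data cannot be guessed from the commitment.

```python
import hashlib

def commit(data: bytes) -> str:
    # On-chain commitment: only the hash of the submitted data is stored.
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, commitment: str) -> bool:
    # Later audit: the bank reveals the data; anyone can recompute the
    # hash and compare it against the commitment stored at submit time.
    return hashlib.sha256(data).hexdigest() == commitment

c = commit(b"customer-records-v1")
assert verify(b"customer-records-v1", c)       # honest reveal passes
assert not verify(b"tampered-records", c)      # tampered reveal fails
```

As the speaker says, this does not prove the data was truthful at submission time; it only pins the bank to whatever it submitted.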
the encryption public key. This is the stuff that you copy and paste, that you take to the basement computer. The other is a screenshot of the standalone encryption app. First you define how many data sets there are, then you submit your data, either as values or as files. You will be able to work with this example, because we are committing it to the Avalon project; it is already on a branch. Then you create the JSON file, you review it before it is encrypted, then you encrypt it with the public key that you supplied. So it's a very simple Python application, but it guides you through all the steps needed to create a ready encrypted, signed, hashed message that the service will accept and process. And yes, the basic thing is: it works, and I invite you to experiment with it. Thank you. The second use case we'll talk about is attested oracles, from Chainlink. All right, hello everyone, thank you very much. We have Yuan Yi here, from France. Today I would like to talk about how Chainlink can connect the TCF to real-world data. When we started working on Chainlink, what we wanted to solve was how to bridge public blockchains to real-world data, because after all, a public blockchain without market data, insurance data, IoT data doesn't have a lot of use cases. And we feel it's the same for the TCF: the TCF provides a great framework to scale and to compute smart contracts in a private manner, but how can we provide Hyperledger Avalon with data, and how can we make sure that this data is as accurate and secure as possible? Currently the problem is that Hyperledger Avalon doesn't have safe access to data input, yet it's a great framework, and all of its great use cases require real-world data and events. Basically what you want to do is give Hyperledger Avalon access to payment methods such as PayPal or SWIFT, where, if
you were to compute a smart contract in the TCF framework, you could trigger external actions, right? We also want to be able to allow, next one, sorry, my slides are a bit off. Basically we have multiple use cases. We want to allow the TCF to get access to real-world data: market data from Bloomberg, from Reuters. We also want to be able to trigger external payments from the TCF. And the third use case is to allow the TCF to gain access to other blockchains: let's say you were to compute a smart contract inside the TCF, you could trigger a Bitcoin payment, an Ethereum transaction, or even an Ethereum smart contract. Those are all great use cases that we've been working on implementing. Next one. All right, so we also want to guarantee end-to-end reliability. The whole idea is that we use public blockchains in order to be tamper-proof and transparent, right? We use the TCF in order to scale and to guarantee privacy whenever we compute smart contracts, and we also want to make sure that the data being fed into the system is as secure as possible. After all, if you have a system processing real-world data inside an enclave, but you're not sure about that data, you run into a huge issue: why have this super secure system if it is being triggered by super insecure data? So what we've been working on at Chainlink is having multiple sources fetching data from multiple data providers, in order to make sure, through redundancy, that the data you get into your smart contract is as secure as possible. And I believe it's hugely important for the TCF to gain access to data which offers redundancy and which offers incentives, where people are incentivized to provide the right data points without being malicious, right?
Basically, what you want to avoid is this architecture: you have a great, robust system, currently Ethereum and Hyperledger Avalon, and we don't want this great system to be triggered by one centralized oracle, because if you have a super reliable system it doesn't make sense to have it triggered by a single centralized oracle. We want redundancy at the oracle level, where multiple oracles provide data from multiple data providers. Next one. And yeah, that's what we implemented. It's working, and we're going to commit it to Hyperledger Avalon; it's a way to provide data to the TCF using Chainlink nodes, and I'm going to run you through exactly how the process works. We have a requesting contract which is on chain; it can be on Ethereum, on Hyperledger, on any blockchain you can think of. It sends an on-chain request to a Chainlink contract. This Chainlink contract triggers a Chainlink node; Chainlink nodes are run by blockchain infrastructure companies who operate full blockchain clients alongside the node, and the node retrieves the external data from an API endpoint. This API endpoint could be Bloomberg, Reuters, any kind of data you can think of. Once we have retrieved the data, we feed it to an adapter; this adapter encrypts the data and sends the work order to the TCF. The work order is basically the sequence of operations that needs to be processed inside the TCF. Now, for the most security-conscious people in this room: you'll probably notice that when the data goes through the Chainlink node, there is a possibility for the oracle to tamper with the data right at that point, and that's where the beauty of the TCF really comes in. Let's say I want to prove that the data
hasn't been tampered with by the Chainlink node. I will ask the data provider to sign the data, and then I can verify that signature inside the TCF. Now, verifying signatures is a very, very expensive process; it's not something you want to do on-chain, it's extremely compute-intensive, so doing it inside an on-chain contract wouldn't be a viable option. Here, with Hyperledger Avalon, you can just verify the signature at the TCF level, and that's a huge advantage: you can verify that the data was signed by the data provider. What we also offer is running the same request through multiple Chainlink nodes. So you get attestation of the data, and you get decentralization through redundancy, through having multiple Chainlink nodes providing the same data points from multiple data providers. We can always get more secure, but I feel this data is already quite secure, right? What we feed into the TCF already carries the many security guarantees provided by combining the TCF and Chainlink nodes. Next slide. Once we have this data, the process to consume it is very simple. Let's say we have a smart contract creator who has a use case: he wants to get a price, but he doesn't want to transmit this price on-chain, so maybe he does some computation off-chain using the TCF and returns the result using the same process: we go through the adapter, through the Chainlink node; the Chainlink node feeds the answer to the Chainlink contract, which feeds it into the consumer contract. A very easy process. Now, I didn't give any one concrete use case, just because the use cases for this are limitless: you can get any kind of API endpoint to feed data to smart contracts in the TCF using Chainlink nodes, and it goes from insurance contracts to
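The idea of verifying the provider's signature at the TEE level rather than on-chain can be sketched as follows. This Python sketch uses an HMAC as a stand-in for the provider's public-key signature (a real deployment would verify e.g. an ECDSA signature inside the enclave); the key and data here are hypothetical.

```python
import hashlib
import hmac

# Stand-in for the data provider's signing key. A real deployment would
# use an asymmetric key pair, with only the public key in the enclave.
PROVIDER_KEY = b"provider-demo-key"

def provider_sign(data: bytes) -> str:
    # The data provider signs the payload before it passes through any
    # oracle node, so tampering en route becomes detectable.
    return hmac.new(PROVIDER_KEY, data, hashlib.sha256).hexdigest()

def enclave_accepts(data: bytes, signature: str) -> bool:
    # The expensive signature check runs at the TEE level, not in an
    # on-chain contract, which would be prohibitively costly.
    expected = hmac.new(PROVIDER_KEY, data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

price = b'{"ETH/USD": 190.42}'
sig = provider_sign(price)
assert enclave_accepts(price, sig)                       # untouched data
assert not enclave_accepts(b'{"ETH/USD": 1.00}', sig)    # tampered by a node
```

Running the same request through several independent nodes then adds redundancy on top of this per-payload check.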
global trade to trade finance, basically any kind of use case that needs data but doesn't want this data published on-chain, because financial enterprises maybe don't really want to publish all their trades on-chain, right, but still want the security guarantees that the TCF, Ethereum and public blockchains offer. So that's a beautiful way to do it, and the possibilities are limitless. I think I've probably run out of time, so I'm going to end here; I had a few other slides, but I think it's fine. I would just like to finish by saying that this was a beautiful effort that many companies led through the EEA and through Hyperledger, and I'm really excited to see where these use cases are going to go. What you're creating here will have a huge impact on the industry, I think, and will drive companies to start using public blockchains more and more. Thank you very much. So now we have, last, Can from Enigma. My name is Can Kisagun, I'm one of the co-founders at Enigma, and I lead product there with my colleague Ainsley, who's in the back of the room today. It's great: you mentioned an application earlier; back then we worked together with Julio when he was still there, and then you mentioned how to get data securely into blockchains. I'm going to talk about how we've tried to build a blockchain that is an implementation of TCF. At Enigma our goal is to improve the adoption and usability of decentralized technologies. We focus primarily on public blockchains. The way we try to do this is by enabling blockchains, public or private, to handle sensitive data, and we've built a computation platform that enables blockchain functions, or smart contracts, to run with encrypted inputs; this is done using the TCF framework. I guess the problem of privacy is very obvious, so I'm not going to spend time on it. This is something that's been very dear to us; we started our research in 2015 when we were at MIT, and at the time we were focusing
on multi-party computation. We wrote two papers that were among the most cited in the industry. Fast forward four years: we're at the place where we're about to launch our network, which introduces this concept of a secret contract. A secret contract is a smart contract that can run with encrypted inputs, and we're seeing that this has a lot of use cases in the public blockchain ecosystem, both in terms of enabling new applications that haven't existed before and of creating a much better UX in certain cases. Today I'm going to talk about Discovery, which will be the first release of our network in the next couple of months. Discovery is interesting because it's an implementation of TCF: we actually went our separate ways but converged on more or less the same spec. It improves on TCF in two ways: one, it is a stateful network where the state is private; and two, it is compatible with Ethereum, which means a contract that lives on Enigma can call a contract on Ethereum to move funds, to mint tokens, or to store records of that sort. Let me talk a bit about the network very quickly. There are developers who build apps, similar to Ethereum; users who interact with these apps; and finally workers who execute these tasks. These workers run SGX-capable devices, and that's how the network achieves its privacy feature. On a high level, the way the network works: the middle thing I have there is the secret contract network. Again, the interesting thing is that the inputs are encrypted and the state is encrypted at all times. The user sends the input; while sending it, they use our library, which performs encryption locally and then sends the encrypted input to the network. We have built a P2P network where all the state is kept and the communication takes place; that too is encrypted at all times. The worker who is chosen to work on this task, based on a proof
of stake algorithm, has the ability to decrypt both the inputs and the state of the contract, do a computation on them, and then update the Enigma state. Once the Enigma state is updated it remains encrypted; it is visible only to the user who created this transaction, and as I mentioned, we can keep computing on it. Here I was going to focus a bit more on how this applies to enterprise, but I think it's interesting to show how this can be used on public Ethereum. So I want to talk a bit more about what we, and other people, are building using this system. We recently built a coin mixer to achieve transactional privacy on Ethereum; we see a lot of different projects that try to do that. We use our network and the secret contracts to collect encrypted recipient addresses and then shuffle them in order to achieve transactional privacy. There are a lot of DeFi use cases that are interesting here; many require auctions in order to create efficient marketplaces. This could be things like creating order books, or having an auction, say, for creating a fixing product on top of the interest-rate swaps in the DeFi ecosystem. This also applies to a lot of gaming situations where there is a multiplayer setting: a very simple example is rock paper scissors, and you can take it to the next level and build a poker game that is completely decentralized and cannot be shut down by governments, which I think is very interesting. Governance falls into it as well, because voting and similar things that require commit-reveal can hugely benefit from this, and that's the interest and traction we were primarily seeing on the public side. One thing that's interesting for the enterprise folks in the ecosystem, and this may sound a bit bold, but I think it's a point worth making: I personally think the current enterprise blockchain
hasn't significantly improved on the status quo of shared databases, and the reason for that is primarily privacy. We have a lot of consortium blockchains, which create further data silos and fail to tap into the network effects that provide the real value. With something like Enigma this can be addressed, because now you can have an actual blockchain that connects to different blockchains, and parties with conflicting business interests can interact with each other without revealing anything, both about their business logic and about the things that trigger their contracts. So we think this is interesting going forward; we're excited to contribute to these specs, the Hyperledger Avalon specs, mostly around sharing knowledge on how to do stateful computation. That's my presentation, thank you very much. So, when we started I was quite worried that with so many people presenting we would run out of time, but that was basically the session, and thanks to everyone; a lot of hard work was put into this. The main point is that the specs are mature, and you heard about Hyperledger Avalon: the EEA cannot host source code, so this work group was set up in Hyperledger, under the Linux Foundation, and that's where all of the code we're talking about will be placed. There's a community building there already; there are 15 to 16 companies actively contributing, both to the use cases and to the specs. We look forward to having as many people as possible join that effort. And the framework that Eugene talked about: we wanted to demonstrate it here, but given the situation we couldn't, so he has the whole framework on USB sticks, and we can just pass them around and people can pick one for themselves.