So in a decentralized environment, and taking into account that malicious software can somehow end up on your computer, trustworthy computation goes without saying: it's a need, it's a must. But different parties may need different aspects of trustworthiness in terms of computation, and this is the important question: what exactly do we need from trustworthy computation? For example, we may need to deploy a peer-to-peer network of decentralized computers with some consensus algorithm, or we may need to deploy a smaller consensus between two computers to compute something, and the second use case requires different aspects of trustworthiness than the first one. In the first one you need bounds on the malicious actors, on the number of them, and some availability; in the second case you probably need confidentiality. And if you would like to decentralize a centralized service that hosts some general-purpose computation, then it is quite a different deal again, and there are more such cases. So we had to ask ourselves this question: what is important from our perspective at Golem?
So just a short recap in the context of this presentation. Golem is a network of heterogeneous resources, ranging from individual machines to subnetworks, exposing their compute power to the network, and two kinds of participants that interact with both the resources and with each other. The first one is the requestor, the participant who would like to do some computations using the resources, and the second one is the provider. The provider rents out the resources and perhaps would like to do it in a secure manner. So these are the basics, and this is the fundamental layer on top of which we of course need to build further layers, for example the economy, but in the context of this presentation I focus only on this infrastructural part.

The problem statement is pretty simple: the requestor wants to carry out computation in a trustworthy manner on the resources provided by the provider, and the provider should be safe in this setting, protected from malicious software, malicious binaries. This simple problem statement maps pretty well to high-level requirements. The requestor wants to be able to run any binary in this network, perhaps efficiently, and this binary shouldn't need any additional changes just to be run within the network. The task input used by the provider should be exactly the same as the requestor provided, the binary that is run on the provider's machine should be exactly the same as the requestor wanted, and the environment should be exactly the same as the requestor wanted. This is different from the first requirement, which states that the requestor should be able to choose absolutely any binary.
We need to make sure that no one can interfere with the binary before it is run. The execution should be carried out in a valid way, with no tampering and no interference, which means that the requestors can expect valid results provided that the task was encoded the right way. The output data cannot be altered by anyone in a way that is undetectable to the requestor. And only the requestor should be able to show the data to the external world, meaning that perhaps the application that is hosted on the provider's machine can look at the data, but other than that no one should be able to leak this data to the outside world, only the requestor if he chooses to do so. And let's not forget about the provider: he has to be protected from the code as well.

So how can we meet these requirements? There are a few approaches, quite a few. The point is that there is no single silver bullet for all of these requirements, and different approaches result in meeting some of them, or all of them in some way, but not all of them with all the required features. For example, we can constrain ourselves to deterministic tasks, for which proofs of work exist; the trade-off is that we can only do deterministic work. Or we may sacrifice confidentiality and gain integrity, but then we lose confidentiality.
We may, for example, use third-party sources of trust: either a third-party service that provides trust to the actors, or we may simply use some checkpointing to make cheating less feasible. At the same time we can use infrastructure such as trusted execution environments, or a mix thereof. As I said, there is a trade-off for every approach, and it is either inherent to the method, such as using proof of work, or task-specific. For example, in rendering you may be interested in confidentiality if you render a movie, but if you render an open-source animation it may not be so important to you; you only care about the integrity. It is also workload-dependent: for example, for trusted execution environments, memory access patterns may have a profound impact on the efficiency. So all of this had to be taken into consideration. From our perspective, we wanted a solution that is generic, easy to use, has well-specified security considerations known to us and to others, and allows for remote computation, because that is what we at Golem do. So, as you know, our trusted execution environment of choice is SGX. Or, more importantly, it's not only SGX but the technology stack that can be built on top of SGX, and we achieve it by means of Graphene and Graphene-ng. This is quite an interesting stack, because it allows us to provide generic computations meeting most of the requirements, with more or less known efficiency considerations and known security considerations. We know that the communities around it keep it evolving in terms of, for example, security and efficiency. And that's how we get to our discussion. So just a short recap of what SGX is: SGX is an Intel technology, an architecture enhancement to the processors, allowing protection of an application and its data from other processes on the same machine, even the privileged ones. This happens in the so-called enclave compute model. Additionally, there is a way of making sure that the computation really takes place in the enclave.
So this is called remote attestation. This is quite powerful and quite good. The issue is that from the developer's perspective it's kind of limited: you can only run applications if you code them from scratch, preferably using the Intel SDK. You cannot run arbitrary binaries; they have to be modified, and by default you have to specify a static interface of interaction between the application in the enclave and the untrusted part of the application outside of the enclave, on the host. This is somewhat limiting: we get this powerful feature of running in a secure enclave, but at the same time we cannot run arbitrary binaries, which is important to us. Still, this is a good starting point, a good building block for something more generic, and the next step from our perspective is Graphene. Graphene is a LibOS-based framework which allows you to run arbitrary Linux binaries in enclaves using all the features of SGX, and from the application's point of view this is just like interacting with a regular OS. This is important: this is a completely unmodified Linux binary. Why did we prefer Graphene over other approaches? Well, there are a few nice properties that we are interested in. The first one is that the LibOS approach clearly states the distinction between the trusted computing base and the attack surface. As you can see, the LibOS is bigger than the alternative approach: it contains more code, and it increases the size of the TCB. But this is the part that we and Invisible Things Lab can work on (mostly Invisible Things Lab, so they can put lots of resources into making it better, and not only them but the community), so we can improve on that. On the other hand, we have absolutely no control over how the user is going to abuse the interface, so the smaller and better self-contained that interface is, the more secure it simply is. A LibOS is also good for sandboxing.
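To give a flavour of how an unmodified binary is wired into Graphene: the application is described by a manifest file that names the executable and the SGX parameters. The fragment below is a rough sketch; the key names follow Graphene's manifest syntax only approximately, so treat every entry as an assumption and check the documentation of your Graphene version.

```toml
# Illustrative Graphene-SGX manifest fragment (key names approximate)
loader.exec = "file:/usr/bin/python3"        # the unmodified binary to run
loader.env.LD_LIBRARY_PATH = "/lib:/usr/lib"

sgx.enclave_size = "1G"                      # enclave address space
sgx.thread_num = 8                           # max threads inside the enclave

# Files measured at build time; the enclave refuses tampered copies
sgx.trusted_files.libc = "file:/lib/x86_64-linux-gnu/libc.so.6"
```

The trusted-files entries correspond directly to the integrity requirement from earlier: the binary and environment the requestor gets are exactly the ones that were measured and signed.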
We still want to have the provider's security when we add this additional layer. And it's pretty cool, because a LibOS can be used in a way where it's easy to replace the host or guest OS. So what's the difference between the regular way you run the computation and Graphene? In vanilla SGX, using the Intel SDK, you have to take the application's source code, tailor it to the enclave compute model, specify the interface, compile it and run it. This is quite different from what you have to do here: you take an arbitrary binary (right now a Linux binary, Ubuntu and Debian, but other than that it's arbitrary), you pack it with Graphene, and you're good to go. Almost, because this process still requires some manual work to configure, deploy and run the application. That's why we took the next step, called Graphene-ng, which is Graphene plus a bunch of features that should result in better UX regarding both the enclave lifecycle and deploying the application. First, protected files: protected files is a library which allows secure, encrypted communication between the owner of the enclave, or any party that initiated the computation, and the enclave itself; the host which hosts Graphene and the enclave cannot interfere with the files in an undetected way. This part is important because, as I said, we want to make sure that the computing side, the provider side, is secure, and at the same time it makes it easier to configure the environment. Then, tools and scripts are there for UX. And bug fixes: it's a very small bullet point, but it's very important. Making Graphene stable took a lot of work, put in by Invisible Things Lab, and although there were no, or almost no, important security vulnerabilities, the stability work was substantial.
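The guarantee just described for protected files, that the host cannot alter data without detection, can be illustrated with a minimal conceptual sketch. This is not Graphene's actual on-disk format: real protected files also encrypt the contents, and the key is provisioned into the enclave; here a standard-library HMAC stands in for the integrity side only.

```python
import hashlib
import hmac
import os

def seal(key: bytes, plaintext: bytes) -> bytes:
    """Attach a MAC so any modification by the host is detectable."""
    tag = hmac.new(key, plaintext, hashlib.sha256).digest()
    return tag + plaintext

def unseal(key: bytes, blob: bytes) -> bytes:
    """Verify the MAC before trusting the contents."""
    tag, plaintext = blob[:32], blob[32:]
    expected = hmac.new(key, plaintext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("file was modified outside the enclave")
    return plaintext

key = os.urandom(32)            # in reality: derived inside the enclave
blob = seal(key, b"task input")
assert unseal(key, blob) == b"task input"

# A host that flips even one bit is caught:
tampered = blob[:-1] + bytes([blob[-1] ^ 1])
try:
    unseal(key, tampered)
except ValueError:
    print("tampering detected")
```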
There were quite a lot of bug fixes related to the stability of the solution. For us it is important that those arbitrary binaries can really be run, and this requires a stable Graphene. So, given these features, I think the best way to showcase how it works is just to show an example, and as it happens we have one such example: the Golem Brass integration, Graphene-ng with Blender. It's a proof of concept for now, due to stability issues for example, but other than that it is a working example. So let's take a look at it from a few points of view. The first one is the provider's point of view. The provider just has to prepare an image with an arbitrary application. Preparing this image is mostly automatic; the manual part is a single action where SGX has to be enabled, which may require some BIOS tweaking (well, not tweaking, just switching something on in the BIOS), and configuring protected files, which requires an enclave manifest and specifying the Docker run parameters, because an enclave has nothing to do with Docker by default. So those protected files have to be configured on both sides, in the enclave and with Docker, but other than that it's almost automatic. The user prepares the container by deriving it from the provided one, the one with the Graphene-ng SGX template; it is the one that's ready. The user app is whatever the user chooses it to be. The key is something the user can either generate using a script or obtain from another source, and the key is then used to sign the container's content; this happens with a script as well. So it is mostly automatic. Exactly the same process was used to prepare the Golem integration, the Blender integration; nothing is different, the only thing is that we have a specific application, so the arbitrary app means Blender here. Okay, so now, the point of view of the handshake process.
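Before moving on, the provider-side packaging flow above could be sketched as a Dockerfile. Everything here is hypothetical: the base image name, the paths, and the signing script are illustrative stand-ins, not the actual artifacts Golem ships.

```dockerfile
# Hypothetical sketch of packing an arbitrary app with the
# Graphene-ng SGX template image (all names are illustrative).
FROM graphene-ng-sgx-template:latest

# The arbitrary user application; in the demo this is Blender.
COPY ./my-app /app

# Key for signing the enclave content; it can be generated by a
# helper script or supplied by the user from elsewhere.
COPY ./signing-key.pem /keys/signing-key.pem
RUN sign-enclave --key /keys/signing-key.pem --app /app
```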
I'm not going to describe it in detail, but the point is that this process allows the requestor to connect with the enclave on the provider's machine, and with the hosted binary, through a trusted, encrypted channel. What's important here is that it's mostly automatic, and where it's not, there are scripts to assist. This was important for the Golem integration, because the user interacts with Golem through the UI and is not really interested in seeing all of this, so it happens automatically, and there is a script for verifying the quote, for example. A more interesting point of view is the application's. The application's point of view is what you see on the left: simply nothing. The application sees it as a regular OS, regular I/O, and that's it, no changes. If you take a step back, you see that the framework is there, the logic is there, and it's doing its job under the hood; for example, the I/O is encrypted and decrypted on the fly. It's there to be transparent to the application and, in fact, to the users. Okay, an even more interesting point of view is the requestor's. The requestor still has to interact with the nodes in the network, connect to them, and choose the nodes which should compute something for him, but it can be envisioned as if the requestor were using locally available resources. Of course, these resources are not available all the time, but other than that the requestor can treat them just as local resources that are maybe not always available, making his computer more powerful. So yeah, as I said, we have a working example, a working integration. You can get more information about it at our booth and see the demo. What we achieved with this integration is, well, we proved that the Graphene-ng approach is a valid one and that the features it offers can be used to make such an integration. It offered what we wanted: confidential remote computation with an arbitrary binary.
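Coming back to the handshake for a moment, its core step is quote verification. The toy model below is only conceptual: a random HMAC key stands in for Intel's hardware attestation key and the attestation service, and the report layout is invented for the sketch. The point is merely what the requestor checks, namely that a genuine enclave running the expected binary sits on the other end of the channel.

```python
import hashlib
import hmac
import os

# Stand-in for the CPU's attestation key; in reality quotes are signed
# in hardware and verified via Intel's attestation infrastructure.
ATTESTATION_KEY = os.urandom(32)

def make_quote(enclave_binary: bytes, channel_pubkey: bytes) -> dict:
    """Enclave side: bind the code measurement to a channel key."""
    mrenclave = hashlib.sha256(enclave_binary).digest()
    report = mrenclave + channel_pubkey
    sig = hmac.new(ATTESTATION_KEY, report, hashlib.sha256).digest()
    return {"report": report, "sig": sig}

def verify_quote(quote: dict, expected_binary: bytes) -> bytes:
    """Requestor side: check the signature and the measurement,
    then trust the channel key embedded in the report."""
    sig = hmac.new(ATTESTATION_KEY, quote["report"], hashlib.sha256).digest()
    if not hmac.compare_digest(sig, quote["sig"]):
        raise ValueError("quote not signed by the attestation authority")
    mrenclave, pubkey = quote["report"][:32], quote["report"][32:]
    if mrenclave != hashlib.sha256(expected_binary).digest():
        raise ValueError("enclave runs a different binary than expected")
    return pubkey

binary = b"blender-under-graphene"
channel_key = os.urandom(32)       # enclave's ephemeral channel key
quote = make_quote(binary, channel_key)
assert verify_quote(quote, binary) == channel_key
```

Once the requestor holds the verified channel key, everything sent over that channel is readable only inside the attested enclave, which is what makes the rest of the flow automatic.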
Well, Blender is the binary of choice, but it could be any binary, well, any Linux binary. What's more important is that there is a clear decoupling of the binary, the application that is hosted on the infrastructure, from the infrastructure itself. The infrastructure is SGX, and the application is any application; SGX can be seen as infrastructure from this perspective, which is important because providers can rent out this platform. Okay, so this is pretty cool, but there is still some more work to do, both on our side and in general on the SGX and Intel side, you name it. What would be nice to have is a liberated attestation service, so that you don't need a central point to attest your enclaves, and flexible launch control: for now Intel controls the way enclaves are launched, which potentially allows someone to be blocked from launching, so you should be able to write your own launch enclave and launch any enclave you wish. Another thing is that there are some known attacks on enclaves, and mitigation steps are required for these attacks; and we need to increase the efficiency of the solution. That's on the SGX side. On our side, we want a stable and efficient Graphene, and this requires bug fixing, proof of computation for the economy layer, and Windows support, because right now we have only Linux binaries. So this is a proof of concept and requires additional work, but we can already talk about use cases that can potentially benefit from this stack. The first one is Golem-specific: local verification. (Hey Peter, you only have about two minutes.) Okay, so, local verification: the idea is to make sure that the verification takes place on the provider's machine. Usually it takes place on the requestor's machine, and you don't lose anything, it just works this way. Another one is Golem Unlimited.
In Golem Unlimited you have a LAN-like setting with trusted computers; you can equip them with SGX, making a more powerful SGX node, and expose it, for example, to encode or transcode movies. Identity management inside the Golem network or the Golem Unlimited network also makes sense, because it's easier to seal the identities, well, the keys, inside enclaves, provision the nodes, and work with them automatically. There are other use cases that people can come up with that should be able to benefit from this technology, for a wider audience. For example, decentralizing centralized services: we can do it using SGX. Of course it is not an easy task, because decentralizing may require additional algorithms, but the building block is there, and it may potentially be used for secure multi-party computation and to decentralize a centralized service. Atomic swaps: well, it can be done. It has some problems, because in an atomic swap you need to make sure that the transactions are not only not forged but that they actually reach the blockchains; but if you use multiple enclaves, then it's easier to reach consensus and push the transactions. This can be used, for example, as a building block in distributed exchanges. We can also seal keys, either locally or remotely; it's just a question of whether you trust the technology enough to seal your keys remotely. This way you can, for example, store your identity in a distributed manner, both geographically and by making k-of-n signatures required; you can express your degree of uncertainty towards the TEE this way: if you trust it, then one signature suffices; if you don't trust it that much, then you require more. It can be used in existing projects as well. For example, we have minimal viable Plasma, with the central point being the Plasma chain operator. We would like it to be reliable and not to cheat, so let's just enclose it in an enclave. At the same time we would like to make it reliable in terms of block withholding, which means
that we can decentralize it and make it harder for an operator to withhold blocks. Another one is Horde. Horde is a platform for governing assets in games and binding them to players, so that players have true ownership of items, and this requires keeping the notion of state on the server side: so, games again, servers on the internet, the player plays the game, and the game server keeps track of the state and makes sure that the player doesn't cheat. Well, in fact most of this logic can be run locally in an enclave, and here we only care about the integrity, not the confidentiality. The next step is decentralizing the whole system, making this local game server connect with others and build a bigger server. And the last one is data streaming. We have an intermediate layer of SGX-enabled machines; a content creator uploads the content, and by virtue of what SGX offers, only authorized clients can download the content. Video streaming is the most obvious example here, but we can also envision processing the aggregated data and hosting some custom logic for this data management, so that users can benefit from it not only by downloading the data but by getting results derived from the data. This may be a building block for a wider, bigger network of data processing. Okay, so I know it was dense, I know it was quite a lot of information, but I hope you got a glimpse of why this technology may be interesting to you. It is definitely interesting to us, and we are going to work on it. Thank you.