So hi everyone. It's a pleasure to be here today and talk a bit about our work on identity provisioning for SGX workloads. To start, I would like to say a few things about Intel SGX. It is a trusted execution environment, and the goal of a trusted execution environment is to create an area where code is protected from other pieces of code. In the case of SGX, you want to protect it not only from other SGX code but also from higher-privilege code like the hypervisor, the operating system, the BIOS, and anything else. You can use Intel SGX on processors that support it, and there are VMs and bare-metal machines available from several cloud providers.

Once you have a piece of code running in this area, protected from other pieces of code, everything that the process does is encrypted. Besides that, there is another feature that is very interesting, especially for us, which is the capability of attesting the code. The attestation process includes not only the measurement of the code that was loaded, and the other things that were used to create the enclave, but also information about the platform: for example, the firmware version that was used, whether hyper-threading was enabled, and whether it is a debug-mode or a production-mode enclave. The measurement of the application and the information about the platform's security are packed into something we call a quote. The quote is produced by the quoting enclave, and once you have the quote in hand you can verify it either using an Intel service or using the Data Center Attestation Primitives (DCAP), which is something you can install locally. So with these very basic concepts in mind, we can think about how you develop code for Intel SGX.
The first approach is not the most exciting one. It is the one you use when you want the most flexibility: you want to minimize your trusted computing base and minimize your usage of the protected memory. That means you will probably be writing from scratch in C or C++. Your trusted code cannot make system calls, and you have to implement attestation and vulnerability mitigations yourself. So this is hard and, most of the time, not very feasible.

There is another approach, known as lift and shift, where you use a runtime to put the complete service you developed, the complete workload, inside SGX. In the happy path, you just change your base container image, rebuild your container, and everything works. In the less lucky path, you may need to recompile your application, or you may need to use alternative packages or libraries, for example from a different Linux distribution. But it's much easier: you don't have to rewrite your code. And although you use a bit more protected memory and a larger trusted computing base, you do inherit things that are done in the runtime. For example, if security mitigations are implemented in the runtime, they are inherited by the applications running on top of that runtime. Regarding the protected memory, according to some recent announcements from Intel, that should not be an issue early next year. So, just to show you how you can port your application, here is an example of the happy path.
You have your Dockerfile with a Python base image, and you replace the base image, and everything should work. What happens is that the new image has a version of Python that was recompiled with the runtime. In this case we are using the SCONE runtime, a set of tools that enables the compilation and execution of SGX workloads. In the second example, you could have a microservice written in another language, like C or even Fortran, and recompile it using the tool set; in this case I used another base image.

Here we follow this lift-and-shift approach because we think it's more realistic. If we think now about why we want to integrate SGX and SPIFFE, the first thing to talk about is that we can have a more aggressive threat model: one where the attacker can have superuser privileges. This can happen for several reasons: the operator may be forced to give this access to someone, the attacker may have stolen credentials from the infrastructure operators, or something in the infrastructure may be compromised. Once the attacker has that access, he or she can do anything: replace components, dump the memory, and so on. That's the kind of thing we are looking at. But one thing I would like to take out of the discussion: let's assume that SGX itself is free of bugs, so the processor is free of bugs, as are the runtime and the libraries. Let's also assume that the workload doesn't have some silly API that exposes everything without authentication. And lastly, I'm assuming that the SPIRE server is trusted, so no one will be issuing identities for workloads other than the SGX ones. With this threat model in mind, there is a clear motivation to do this integration.
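The happy-path port mentioned above can be sketched as a one-line change to the Dockerfile. This is illustrative only: the SCONE image name and tag below are assumptions, not the actual curated image names.

```dockerfile
# Before: a stock Python base image
# FROM python:3.8-alpine

# After: swap in a base image whose Python interpreter was recompiled
# against the SGX runtime. The image name below is a placeholder, not
# a real SCONE curated image tag.
FROM sconecuratedimages/apps:python-3.8-alpine

COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```

The rest of the Dockerfile, and the application itself, stay unchanged; that is what makes this the happy path.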
The very basic one: if you have a set of services that you trust, then maybe you can now be more open to services running on a remote infrastructure that you do not trust, provided they are running inside SGX. You do not need to make your application more complex; you just give them identities that reflect this status. The second is exactly the opposite: maybe you have some SGX workloads and you want them to talk to non-SGX workloads, but you want at least some evidence that those are on a reasonable platform, for example that they are in the correct cloud provider and the correct security group, as you do normally with SPIFFE identities. And the last one, which I consider very interesting and promising, is that you could put small ghostunnel-style tunnels inside your applications. When you execute an application inside SGX, the runtime has to intercept all the communications anyway, and it does so transparently, so it's not difficult to wrap this communication in TLS connections. And these TLS connections, something that has been called network shielding, could use SPIFFE IDs on outgoing connections and check for SPIFFE IDs on incoming connections. In a way, you get transparent adoption of SPIFFE identities.

In our integration, we have four main components. First, the server; as I said, I assume the server runs in a trusted environment. Then we have the workload registration process, which includes a name and the name of a session. A session is what we call the configuration of an application, and it describes not only the executable but also the state of the file system that supports that execution. These are the selectors.
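As a point of comparison, this is roughly what an external SPIFFE-aware TLS tunnel looks like today with ghostunnel; the idea above is to do the same thing transparently inside the enclave. The addresses, file names, and SPIFFE ID are made up for the sketch, and the command is only assembled and printed, not executed.

```shell
# Illustrative only: a ghostunnel in server mode that terminates TLS in
# front of a plaintext service and admits only peers presenting a given
# SPIFFE ID in their certificate URI SAN. All paths, ports, and the ID
# are assumptions for this sketch.
GT_CMD="ghostunnel server --listen 0.0.0.0:8443 --target 127.0.0.1:8000 --cert server.pem --key server-key.pem --cacert bundle.pem --allow-uri spiffe://example.org/sgx-workload"
echo "$GT_CMD"
```

With network shielding, the same TLS wrapping would happen inside the runtime, so the application would not even know the tunnel is there.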
At the top of this slide you can see an example of registering an SVID for a workload that has a specific session name, that is, a configuration name, and the hash of that configuration. Then we need another component, the helper. The name "helper" and the logic behind it come from the discussion now happening in the community around serverless infrastructures. This helper sits close to the server, receives the SVIDs it is capable of managing, and pushes those SVIDs to a secret store, in this case an SGX secret store. Then we have the workload, which runs on a node that doesn't have an agent. The workload wakes up, gets its SVID, its identity, from the SGX secret store, and then it can use this SVID for the things it always does. And finally we have the attestation helper, which plays the role of the quoting enclave that is necessary in the SGX attestation to provide the evidence about the code and the platform.

Here is the flow of the process. In step one, the developer provides the configuration of his application: he says, my application has this and this configuration file, this and this environment variable. This is stored in the SGX secret store. In step two, the operator or someone else registers the workload, associating the hash of that configuration with a SPIFFE ID. Once the association is created, the SPIRE server starts minting the SVID for that ID, and this SVID is propagated through the helper to the secret store. Then eventually the operator deploys the workload, and as it starts, it transparently talks to the attestation service and to the secret store and gets access to its SVID. In our case we are using certificates, so X.509 SVIDs.
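Steps one and two above can be sketched with a couple of commands, a minimal illustration only: the session file layout and the `sgx:session_hash` selector name are assumptions, not the project's actual schema, and the `spire-server` registration command is printed rather than executed.

```shell
# Step 1 (sketch): the developer writes a session, i.e. the application's
# configuration. The format below is invented for illustration.
cat > session.yml <<'EOF'
name: demo-workload
environment:
  - GREETING=hello
EOF

# Hash the session; this hash is what gets bound to the SPIFFE ID.
SESSION_HASH=$(sha256sum session.yml | cut -d' ' -f1)
echo "session hash: ${SESSION_HASH}"

# Step 2 (sketch): associate the session hash with a SPIFFE ID.
# Selector name and SPIFFE ID are placeholders; the command is echoed,
# not run.
echo spire-server entry create \
  -spiffeID spiffe://example.org/sgx-workload \
  -selector "sgx:session_hash:${SESSION_HASH}"
```

From here on, the SPIRE server mints the SVID for that ID and the helper pushes it into the SGX secret store, as described above.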
One thing that is interesting here is that this conversation is transparent, and the SVID can appear as an environment variable that can only be seen within the enclave, or even as a file that doesn't actually exist in the file system: it is only seen by the application when it is running inside the enclave and has already been attested. Once the workload has the identity, it can talk to other services as usual.

So I have a short demo for you; I hope you can see my screen and it's not too small. Here I have a terminal that I'm going to use to start the SPIRE server, nothing special. Then here I get the bundle. Here I start my helper, which is actually a variation of the agent: we started with an agent and made it a helper, and it joins the server with a join token. Next, in these two terminals, I have the service, which is composed of a very simple unprotected resource, and in front of it I will put a ghostunnel. Before I start the ghostunnel, I register an SVID for it; then I can start it. Here on the top I have the terminal I'm going to use for the SGX workload. As I said, the developer submits this configuration, which is called the session, and it has some information about the file system. Once he does that, here is the hash of this session, and this hash is needed to register the workload. Actually, I should compute the hash first, so I will export it as an environment variable to make my command a bit more readable. Now I have the session, and I can create my SPIFFE identity using that session hash. With this registration in place, I can just run my SGX workload. And what happened now is that my SGX workload was able to get its SVID from the local store, and it also got this text message that was served by the application.
If I tried to reach the service directly, of course, I would not be able to, because the ghostunnel would not allow it. I could do a few more things. For example, I could change the configuration: this is the configuration where I set the hash of my executable, what it is going to do, the environment variables. If I changed anything here, the hash of this session, this configuration, would change, and the workload would not execute anymore. But I have already used too much time, so let me just come back here and give some thanks to the people who contributed. Matheus has been very active in the community recently; we work together. I would also like to thank Gustavo and Nigro from HPE for their feedback, and Christof Fetzer from Scontain, the maintainer of SCONE, from whom I got some very interesting information even a couple of weeks ago. So I would be very happy to answer any questions; let me take a look at the chat.

I think the first question is: what would you describe as the stage of this work, and is this something you plan on open-sourcing? So the plan is for this to be open-sourced. The very short-term step is to open the proposal, the request for comments. The code is now in a private git repository, but it can be opened; we just need to refine it a bit more, also short-term. And of course I would like to hear from the community about this approach we are using, mimicking the serverless push-helper approach. Does that answer your question? It does, thank you. Roughly what timeline do you anticipate for this to be available in open source? It's hard to answer, but I would like to create the issue for discussion within the next week, and the proof of concept could be out there in two or three weeks.