Hi everyone, my name is Andre, and it's a pleasure to be here today talking to you about our work on confidential workloads with SPIRE. I'll start with a brief introduction to confidential computing. Basically, it is about protecting the data and the code of applications and services: we have been applying encryption to data at rest and data in transit for a long time, but data in memory is still in clear text, and that can become an attack vector. The idea of confidential computing is to use hardware technologies to isolate these memory regions from other processes, even higher-privileged processes like the operating system and the hypervisor. These environments are called trusted execution environments, and they are especially good for personal data, financial data, and health data, which can be very sensitive. They also help in cases where different stakeholders hold different pieces of data and want to combine those pieces to train some model without sharing the data directly with each other; SGX and other trusted execution environments can help protect the ownership of the data in these cases. Another thing they are used for is shifting trust away from humans: you can have secrets that are created inside enclaves and passed from enclave to enclave without ever being in clear text in memory, and without ever having to be seen by a human, and that helps protect the integrity and confidentiality of those secrets. When I talk about trusted execution environments, I am mostly talking about Intel SGX, which is the one we have been working with, because it is the most common: it is available in several cloud providers and in off-the-shelf servers.
Intel SGX more specifically helps because it has a reduced trusted computing base. Typically your trusted computing base includes not only your workload but the whole stack: the operating system, the hypervisor, and the hardware. When you start using Intel SGX, you take the operating system and the hypervisor out of the trusted computing base and are left with a much smaller TCB. In addition to this reduced TCB, you also get something called remote attestation. Remote attestation is based on the fact that a running application, one that is executing inside an enclave, has a signature: a SHA-256 hash of the state constructed while it was loading. This state includes code, data, heap, stack, and platform state, such as whether hyper-threading was enabled, what firmware version the processor was using, and whether it was in debug mode. This is very helpful for recognizing whether the correct code is executing on the correct platform. Remote attestation is especially useful for our investigation here. It works through challengers: remote applications that can ask an application to prove that it is executing inside SGX and what the MRENCLAVE associated with it is. The application being attested negotiates with a local enclave that is part of the infrastructure, and this local enclave helps the application construct a quote signed by a key that is private to that processor. The quote is presented back to the challenger, and the challenger can validate it with the attestation service. Once the quote is valid, the challenger can look into the report and check whether the MRENCLAVE and the platform characteristics are what it expected.
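To make the challenger's final check concrete, here is a minimal Python sketch of the policy comparison step, after the attestation service has already verified the quote's signature. This is an illustration only: the field names (`mrenclave`, `debug`, `firmware_version`) and the flat dictionary are assumptions for readability, not the real SGX quote layout.

```python
import hashlib

# Illustrative only: a real SGX quote is a signed binary structure; here we
# model just the fields the challenger inspects after signature verification.
def check_quote(report: dict, expected_mrenclave: str, min_firmware: int) -> bool:
    """Accept the enclave only if the measurement and platform state match policy."""
    if report["mrenclave"] != expected_mrenclave:
        return False                      # wrong code was loaded
    if report["debug"]:
        return False                      # debug enclaves expose memory to the host
    if report["firmware_version"] < min_firmware:
        return False                      # platform TCB is out of date
    return True

# The measurement itself is a SHA-256 hash over the enclave's initial state
# (code, data, heap, stack layout), computed while the enclave is loaded.
expected = hashlib.sha256(b"enclave code + data + layout").hexdigest()

report = {"mrenclave": expected, "debug": False, "firmware_version": 7}
print(check_quote(report, expected, min_firmware=5))   # a matching, non-debug report passes
```

Only when all three checks pass would the challenger go on to deliver secrets to the enclave.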
Today the attestation service can use the new DCAP driver and can be infrastructure that runs on premises or at the cloud provider, so that you can validate the quote as I just mentioned. So what were the drivers for integrating confidential computing into SPIRE? The first one is to secure the SPIRE components themselves. Once you have an SGX enclave, you can protect the server by protecting its integrity: if some code in the server gets changed by a malicious attacker, the server loses access to the database and to the CA certificates, and that prevents an attacker that got control of the server code from doing something malicious. You can also protect the confidentiality of the secrets: even if the code is intact, someone could look into memory and steal identities, certificates, and private keys; by executing these things within enclaves, you no longer have that problem. On the agent side, you can again protect the integrity of the SPIRE agent code, and you can also protect the cached entries on the agent. If someone gets control of the machine where the SPIRE agent is running, the agent will have cached several SVIDs, and those SVIDs could be stolen from memory; SGX enables you to block this access so that the attacker cannot steal the secrets from memory. And the confidential workloads themselves also gain a lot when they are combined with SPIRE.
First, you get a more robust workload attestation for SPIRE: we already have the attestors related to Linux, Docker, Kubernetes, and others, but now you get one that is based on hardware support and that can check whether the correct code is being loaded on an up-to-date processor. From the confidential computing side, you also gain a lot from SPIRE's support, because now you have one single way to handle identities, the SPIFFE verifiable IDs, while abstracting away what the actual attestation behind each identity was. For example, you could have one identity that reflects a highly trusted workload, and this highly trusted status could be earned either because the workload is executing inside an enclave, even in a less trusted environment, or because the workload is executing without SGX support but in a trusted environment, say on premises, in a place that you trust. This flexibility simplifies operating SGX and non-SGX workloads together, which is a nice plus, because SGX by itself can have a steep learning curve. As we advanced in our development, we faced several challenges, and I would like to tell you about a few of them. The first one is related to the threat model. When we change the threat model of an existing system, we have to make sure that we have safe defaults and that we make the trade-offs clear, because this can be confusing to the user. If the user configures an agent that is not running inside SGX to attest a workload that is running inside SGX, what does he get? It does not make a lot of sense, because an attacker that gains access to the machine could steal the identities of the confidential workload and pose as it. The same applies if you protect the server with SGX but the database is not protected by SGX: things would work, but an attacker could change the entries in the database, forcing the server to sign identities for malicious workloads. So this
is tricky: you have to make this easy but still visible and transparent to the user, so that they can understand what they get and what they need to pay attention to. On the workload attestation side there are a few challenges. One is that the regular workload attestation plugins rely on sources of information that are untrusted in this new threat model: Kubernetes, the kernel, and the Docker engine are not trusted anymore, so you cannot use those sources. Even the process ID that you use to start getting information about the workload can be changed: you could get one process ID and try to talk to it for the attestation, but an attacker could reroute your query away from the original workload, and you would end up giving the SVID to the wrong process, which is something you don't want. The attestation itself, as I mentioned when talking about the SGX remote attestation model, is also something that needs to be well considered, because in the SPIRE model it is done over an out-of-band channel, and as I said, this out-of-band channel could be diverted. It is also the case that you don't want this out-of-band communication to create extra work for the developer: if the application needs to be modified to run with confidential computing and SPIRE at the same time, you pose yet another obstacle to adoption. And finally, the code integrity given by the MRENCLAVE, as I just said, is not enough on its own, because configurations, libraries, and other things on the file system can make a difference in how the application runs. Operation is the final challenge, and there are a few things to consider here. The first one is that VM migrations are not natively allowed with SGX: if you have one SGX component running on a virtual machine and it gets migrated to another server, it may not run anymore.
Second, regarding the configurations that were typically passed by the orchestrator: how do you now provide these configurations in a secure fashion? Third, if you have operators and sysadmins running your SPIRE environment, shouldn't they be out of the circle of trust, so that you can be sure they are not capable of assigning private or highly privileged identities to insufficiently attested workloads? And finally, you have to think about everything: you need to consider not only the SPIRE server and the SPIRE agent but the other components that contribute to this ecosystem, such as the orchestrator, which should be kept untrusted, and the database, which should be trusted, either by running it with SGX or by putting it in a place you trust. You have to understand all these points before you have your operating environment. Now I will pass to Matheus, who is going to talk a bit about the current state of our development. First of all, it's a pleasure to be here today to talk about our current status, and the status is that we now support SCONE-based SGX workloads. You may be asking: why SCONE? First, lift-and-shift approaches like SCONE enable us to easily migrate existing applications; sometimes it is as easy as changing the base images in your Dockerfiles. Second, by leveraging the SCONE Configuration and Attestation Service, the CAS takes care of some operational challenges for us, mainly related to sealing and configuration, and it also saves some effort in the development process. In the SCONE world, workloads are defined by sessions; think of sessions as security policies. Sessions contain policies and attestation conditions for the attestation process, as well as some initial configuration for the workloads. This configuration can include environment variables, command-line arguments, and configuration files injected into the file system view of the workload.
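As an illustration of what such a session might look like, here is a schematic sketch. This is not the exact SCONE session schema; all field names and values are assumptions chosen to mirror the elements just described: an attestation condition (the expected enclave measurement) tied to the initial configuration that the CAS injects into the workload.

```yaml
name: consumer-session            # session name, later usable as a selector
version: "0.3"                    # illustrative field names throughout

services:
  - name: consumer
    mrenclaves: ["c4a1...e9f2"]   # attestation condition: expected measurement (hypothetical value)
    command: ./consumer --broker kafka:9092    # command-line arguments delivered by the CAS
    environment:
      LOG_LEVEL: info             # environment variables injected after attestation

images:
  - name: consumer-image
    injection_files:
      - path: /etc/consumer/config.toml   # config file injected into the workload's file system view
        content: |
          topic = "messages"
```

The point is that everything the workload needs at startup is described in one attested policy object, so none of it has to pass through the untrusted orchestrator in the clear.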
So, if a workload can be defined by its session, we can use the session name and the session hash as selectors in SPIRE. To make this integration work, we are using an SVIDStore SCONE plugin: our plugin pushes identities into the CAS, the Configuration and Attestation Service of SCONE, delegating the attestation process to this trusted component. The workflow is as follows. First, we push the initial configuration into the CAS and receive the hash of the session; then, with the session name and the session hash in hand, we can register our workload. When the agent gets this entry, it pushes the identity into the CAS, so that after the deployment and attestation processes, the CAS delivers the identity to the workload, and the workload can talk to other services and receive sensitive data. And now I'm going to show you a quick demo of the usage experience of our solution. For this quick demonstration I have here a SPIFFE-enabled Kafka and two applications that will interact with this Kafka: a producer, which is a regular application that gets SVIDs via the Workload API, and a consumer, which is a confidential workload getting SVIDs via our SVIDStore plugin. First, I register an entry for the producer, using the container name and the namespace as selectors, and then I can deploy the producer. Looking at the logs: yes, it is producing SVIDs and publishing them to a topic in Kafka. Now I have to post the session for my confidential workload. I have a session here that defines the constraints for attestation and the initial parameters for this application. After posting this session using the SCONE CLI, I get the hash of the session, and I use the session hash and the session name to register my confidential workload, giving the identity /consumer to that workload.
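The workflow above can be sketched as a small, runnable toy simulation. To be clear, every name here (`post_session`, `register_entry`, the selector keys, and so on) is a hypothetical stand-in, not the real SPIRE or SCONE API; the only thing the sketch claims is the order of operations: post session, get its hash, register an entry with session selectors, let the plugin push the SVID into the CAS, and have the CAS release it only to an attested workload.

```python
import hashlib

cas_sessions = {}   # CAS state: session hash -> session policy
cas_secrets = {}    # CAS state: session hash -> secrets released after attestation
registry = []       # SPIRE registration entries

def post_session(session_yaml: str) -> str:
    """Step 1: push the initial configuration into the CAS; it returns the session hash."""
    h = hashlib.sha256(session_yaml.encode()).hexdigest()
    cas_sessions[h] = session_yaml
    return h

def register_entry(spiffe_id: str, session_name: str, session_hash: str) -> None:
    """Step 2: register the workload using session name and session hash as selectors."""
    registry.append({"spiffe_id": spiffe_id,
                     "selectors": {"session-name": session_name,
                                   "session-hash": session_hash}})

def agent_push_svids() -> None:
    """Step 3: the SVIDStore plugin pushes each identity into the CAS for its session."""
    for entry in registry:
        h = entry["selectors"]["session-hash"]
        cas_secrets[h] = f"SVID for {entry['spiffe_id']}"

def cas_deliver(session_hash: str, attested: bool):
    """Step 4: only after successful attestation does the CAS deliver the identity."""
    return cas_secrets.get(session_hash) if attested else None

h = post_session("name: consumer-session\n...")
register_entry("spiffe://example.org/consumer", "consumer-session", h)
agent_push_svids()
print(cas_deliver(h, attested=True))    # the attested workload receives its SVID
print(cas_deliver(h, attested=False))   # an unattested workload gets nothing
```

The design point the toy captures is that the workload itself never talks to SPIRE directly: the CAS sits in the middle as the trusted delivery channel.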
Okay, and my plugin has already started to push the identities, the CA, and everything else it needs into the Configuration and Attestation Service, so now I can deploy my consumer; it will get attested and then receive its SVID. Yes, that's it: it has started to consume messages from the same topic. To conclude this presentation, we have some ongoing work and next steps. One thing we are currently working on is good ways to manage these namespaces and take the operator out of the circle of trust. We are also collaborating with some folks at TU Dresden who are investigating ways to enable confidential workloads to use the Workload API, so that code that is SPIFFE-aware does not need to be changed to use the confidential computing attestation plugins. And finally, we can consider more components of the SPIRE ecosystem, such as upstream authority plugins, the usage of federation, and the SPIRE database, and so on. Luckily, there are some confidential computing alternatives for the database, like MariaDB on top of SCONE and also EdgelessDB. So, I think that's all. Thank you, everyone, for your attention.