Hello. Thank you for joining us today. In this presentation, we will be talking about how you can use SPIRE to securely provide identities to serverless workloads. My name is Agustín Martínez Fayó. I am a maintainer of the SPIRE project, and I am part of the HPE security engineering team. I'm here with Marcos Yacob, who is also part of the HPE security engineering team, and Marcos will run a demonstration later in this presentation.

So let's go through the topics that we will be discussing during this session. We will first have a quick overview of how workloads are attested in SPIRE in order to provide them with an identity. We will then see what the challenges are for that process in serverless environments, and how SVID store plugins can help in this situation. And finally, we will see a demonstration of how SPIRE can be used to provide identities to serverless workloads, leveraging SVID store plugins.

If you're familiar with how SPIRE attests and provides identities to workloads, you probably know that this is a pretty straightforward process where workloads simply call the Workload API to get their identity. But what happens if you can't deploy the SPIRE Agent in your environment? One example of such a situation is serverless computing. What usually happens in this scenario is that an event triggers the execution of a function that runs in an execution environment, interacting with different services. This model of having functions as a service is increasingly being used because of its associated benefits, like not having to manage servers and the way you can easily scale depending on the load, just to name a couple. But the same characteristics that make it so attractive also present some challenges.
This model, where the runtime environment of the function exists just to run the function, doesn't play well with the usual way that you deploy SPIRE, where you have the SPIRE Agent next to the workloads, which call the Workload API to get their identities. In serverless computing, you can't call the Workload API from your function, simply because you don't have it available, and it would be very difficult to deploy the SPIRE Agent in the runtime environment. So we had to look at ways to issue identities to workloads without interacting with the Workload API, and we explored different options for that. One of them was to have the function, or workload, attest directly to the SPIRE Server to obtain its identity. We saw this pull model as attractive for certain scenarios, but at the same time we found some challenges related to performance and reliability, and the majority of the feedback that we got from the community pointed to a push model rather than a pull model. So with that context, we also looked at what a push model would look like, where SVIDs are pushed to platform-specific stores and the function would essentially get the identity material from a predefined location in the store. In that way, SVID management moves out of the execution time frame of the function, solving both the performance problem and the reliability concerns around the dependency on the SPIRE Server, which needed to be available to issue the identity. So the SVIDStore plugin type was introduced with the purpose of being able to store the SVIDs in a designated store.

Let's have a quick overview of how the SVIDStore plugin works. This diagram shows a basic deployment of SPIRE, where you have the SPIRE Server with a datastore that holds the registration entries. The SPIRE Agent communicates with the server, fetching the identities that are entitled to that agent, and keeps the agent cache updated, so it has the SVIDs for the workloads ready to be provided to them through the Workload API.
What the new SVIDStore plugin type does is provide a way to identify the entries that you will use to issue identities to serverless workloads. What this means in practice is that when you create a registration entry, you are now able to specify whether you want to store the resulting SVID from this entry externally. This store action is done by an SVID store plugin that receives the updates from the agent cache, so when there is a new identity, it is notified and called. When the SVID store plugin is notified about a new identity, or an existing identity that is rotating, what SPIRE does, instead of having that identity ready to be fetched through the Workload API, is push it to the designated store. The details of how the identity is stored, and the specific attributes that each store requires, are specified through the selectors of the entry. So, for example, if you are using AWS Secrets Manager as a store, you will be able to specify the secret name through the selectors of the entry. This provides a pretty flexible mechanism to describe the attributes needed by each specific store platform: the AWS plugin will need different selectors than Google Cloud or Microsoft Azure, and selectors can also be used to describe the attributes needed by a completely different store mechanism, like a plugin that stores the identity material on disk. Finally, we see here that the workload running in a serverless environment can fetch its identity from the secrets store in the same way that it gets other secrets. For example, an AWS Lambda function can get its identity from AWS Secrets Manager, where it was stored by the specific AWS Secrets Manager plugin.

At this point I will hand it over to Marcos, who will show a demonstration of how SPIRE can be used to provide identities to serverless workloads, specifically, in this case, to an AWS Lambda function. So Marcos, please go ahead.

Thanks, Agustín. Hello, I am Marcos.
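As a rough illustration of the registration step described above, creating an entry whose SVID is pushed to AWS Secrets Manager could look something like the following command sketch. The SPIFFE IDs and secret name here are made up for illustration, and the exact selector syntax may vary between SPIRE versions, so treat this as a sketch rather than an exact invocation:

```
# Register a workload and mark its SVID to be stored externally
# (-storeSVID) instead of served over the Workload API. The selector
# tells the AWS Secrets Manager plugin which secret to write.
spire-server entry create \
    -parentID spiffe://example.org/agent \
    -spiffeID spiffe://example.org/db-client \
    -storeSVID \
    -selector aws_secretsmanager:secretname:db-secret
```

When this entry is created, the agent cache update triggers the SVID store plugin, which creates or updates the named secret, and does so again on every rotation.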
As Agustín mentioned, I am part of the HPE security engineering team. I am going to demonstrate how we can use SPIRE to provide identities in a serverless environment. We will see a SPIRE Agent that is configured to use an SVID store plugin. That plugin is the one that stores identities as secrets in AWS Secrets Manager. Each secret will contain the identity as a secret binary that holds the SVID: the certificate chain, the private key, the bundle, and all the federated bundles it is related to. At the same time, I will demonstrate a Lambda extension. That extension is the one that communicates with Secrets Manager to get a specific secret and stores that material in PEM format on disk. We chose to use an extension because we can configure it to start with the container and store the material on disk, so when the function is called, it can use the information that is already there. The function is a very simple one: all it does is get the certificates written by the extension and return them to the caller.

So let's move to the terminal. I started a SPIRE Server and an Agent in advance. As you can see, the Agent is configured to use the AWS Secrets Manager plugin, and it's storing the secrets in this region. At the same time, the plugin is getting the access key ID and the secret access key for my environment. We can verify that the server does not contain any entries, so I will start by creating the first one. Here is an entry that we will be using for the db-client, storing that secret as "db-secret". Here we can see that the selector contains the plugin we will be using, the variable that we will be setting (that is, the secret name), and the name itself. It is also possible to use the ARN if we want. As soon as I create it, we can move to the logs. Here in the logs, we will see that the entry was created, the cache is updated, and the SVID is propagated to AWS.
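For reference, a minimal sketch of what the agent-side configuration described here could look like is below. The region and credential values are placeholders, and the exact field names should be checked against the SVIDStore plugin documentation for the SPIRE version in use:

```
plugins {
    SVIDStore "aws_secretsmanager" {
        plugin_data {
            # Region where the secrets are stored (placeholder value)
            region            = "us-east-1"
            # Credentials for the demo environment (placeholders);
            # in practice these can also come from the environment
            access_key_id     = "ACCESS_KEY_ID"
            secret_access_key = "SECRET_ACCESS_KEY"
        }
    }
}
```

With this in place, any entry created with the store option and an `aws_secretsmanager` selector is pushed to Secrets Manager by the agent rather than served over the Workload API.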
Here is the ARN of the secret where it was propagated, the secret name, and the version we're using. We can verify that the secret was created just by describing it by ID. Here we can see when it was changed; here is the name, and an important part is that we provide a tag. The tag is "spire-svid". It is useful for identifying which secrets are managed by SPIRE and will be kept updated. Now we're creating another entry, for a second function we will be using. This function is called web-client, and I chose to name the secret "web-secret". In the same way as before, as soon as I create it, the secret is updated on AWS. But something important here is that the SVID is managed the same way as it is for the Workload API. That means that any time it is updated, because it is rotated, or because the entry changes and forces the SVID to be rotated, it is pushed to AWS or to any other environment we want to support. So, for example, I can update the entry I just created and add a federation relationship. Here we see it is updated because the entry changed: it now has a federation relationship. In the same way, the secret is updated and propagated to AWS. We can verify it the same way we did before, just describing the web secret. Remember the time from before, and here we can see it was updated just now; it's the same timestamp, and here is the name.

Okay, now the SPIRE side is done. Let's start with the function. The function consists of two pieces: as I mentioned before, an extension, which is the one that fetches the secret from AWS, and the function itself. As I mentioned before, the function is very simple; we use it just to get the SVID and return it to the caller. So let me deploy it. This is building the extension, building the function, and publishing the layer; the layer is the extension I mentioned before. Here we have the spire-extension, which is its name, and it's compatible with Python, which is what we use in our case. Right now it is deploying the db-client.
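To make the extension's job concrete, here is a minimal Python sketch of its disk-writing step. The JSON field names (`x509Svid`, `x509SvidKey`, `bundle`) and file names are illustrative assumptions, not the exact payload layout used by the Secrets Manager plugin, and the real extension would obtain the payload from AWS via `boto3` rather than as a string argument:

```python
import json
import os


def write_svid_to_disk(secret_string: str, out_dir: str) -> dict:
    """Split a fetched secret payload into PEM files on disk.

    Assumes the secret is a JSON document whose fields hold
    PEM-encoded material (field and file names are illustrative).
    Returns a mapping from field name to the written file path.
    """
    payload = json.loads(secret_string)
    os.makedirs(out_dir, exist_ok=True)
    paths = {}
    for field, filename in [
        ("x509Svid", "svid.pem"),         # certificate chain
        ("x509SvidKey", "svid_key.pem"),  # private key
        ("bundle", "bundle.pem"),         # trust bundle
    ]:
        path = os.path.join(out_dir, filename)
        with open(path, "w") as f:
            f.write(payload[field])
        paths[field] = path
    return paths


# In the real extension, the payload would come from AWS, e.g.:
#   secret = boto3.client("secretsmanager").get_secret_value(SecretId="db-secret")
#   write_svid_to_disk(secret["SecretString"], "/tmp/svid")
```

Because the extension starts with the execution environment and runs this before the function is invoked, the identity material is already on disk when the handler needs it.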
Here we have the db-client. It is important that we provide the secret name as an environment variable; this is the secret that it is going to use. And here we have the layer that will run before the function, in this case the spire-extension. Now it is creating the web-client, which is pretty much the same: we have the client, which has a role with access to the secrets, has its own environment variables, and has the relationship with the spire-extension layer.

Okay, now that this is done, I will run a very simple script. Basically, it's going to invoke the function, store the response, and parse the response to get the certificates and the SPIFFE ID they contain. So let me call it by name. Here we have the SPIFFE ID, which means the function got the certificate provided by the extension and returned it to us, and as a result we have the SPIFFE ID we were looking for. Just for this demo, I added some logs that print the certificate, the keys, the bundles, and more information; that is not the normal case, it's just for this demo. So I will get the logs. Here we have the SPIFFE ID we were looking for, its expiration, and the certificate: it contains the chain, it contains the private key, it contains the bundles. The same thing will happen if I invoke the web-client, but there is a little difference. Here we can see the SPIFFE ID as expected, but the difference is in the logs, because remember that for that entry we added a federation relationship. So here we can see that the web-client is getting the federated bundle too, and of course it's getting the SPIFFE ID we were searching for, the certificate, the private key, and the bundle.

So that's all. I hope you enjoyed my demo and found it useful. See you!
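To close, the demo function Marcos describes can be sketched in Python as a handler that simply reads back the PEM material the extension wrote to disk and returns it to the caller. The `SVID_DIR` environment variable, default path, and file names are illustrative assumptions, not the exact layout used in the demo:

```python
import os


def handler(event, context):
    """Minimal Lambda handler sketch: return the identity material
    that the extension already wrote to disk (illustrative names)."""
    svid_dir = os.environ.get("SVID_DIR", "/tmp/svid")

    def read(name):
        with open(os.path.join(svid_dir, name)) as f:
            return f.read()

    # Return the certificate chain and trust bundle to the caller;
    # the demo additionally logs the key and bundles for inspection.
    return {
        "certificate": read("svid.pem"),
        "bundle": read("bundle.pem"),
    }
```

Because the heavy lifting (fetching and rotating the SVID) happens outside the function's execution time frame, the handler itself stays trivial, which is the core point of the push model shown in this talk.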