This is Roy. I work at Square on the Cryptographic Identity and Secrets Management team. Today I'm going to be talking about our AWS OIDC authentication architecture using SPIFFE at Square. We're going to go over what OIDC is and why we're using it with AWS, the architecture, our solutions, and the current status of OIDC at Square.

Let's start with the overview and some background about AWS at Square. We're in the process of transitioning to the cloud, with a focus on AWS. It's been a slow process, and there are lots of apps still in the Square DC. These DC apps will often use services in AWS, such as S3 and SQS. Apps in AWS live in separate accounts, so we have isolation there, and we also have isolation in terms of the environment that the apps are in: staging and production will be on different accounts.

So what is OIDC? OIDC is an open standard, decentralized authentication protocol. It allows third parties to verify the identity of end users by sending data about users in JWTs, which are signed by an authorization server, which in our case is SPIRE. So there's no need to use a separate identity and separate authentication for every third party you try to connect to.

We're going to be talking about a small subset of the OIDC spec, which is the discovery provider. The discovery provider exposes JSON Web Key Sets, or JWKS, for JWT validation. The provider has two endpoints: the well-known OpenID configuration and the keys endpoint. The keys endpoint is pretty straightforward; it just serves the JWKS. The discovery document contains metadata about the provider, including information about the issuer of the JWTs, the URI where the keys are served from, the algorithms used to sign the JWTs, and the supported response types.

This is an example of a JWT we would have at Square for OIDC authentication to AWS. In the subject, we have the SPIFFE ID of the app, and in the audience, we have the intended AWS account.
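As a sketch, the claims of such a token might look like the following; the SPIFFE ID, account number, issuer domain, and timestamps here are made-up placeholders, not Square's real values:

```json
{
  "sub": "spiffe://square/example-app",
  "aud": "123456789012",
  "iss": "https://oidc.example.squareup.com",
  "iat": 1699996400,
  "exp": 1700000000
}
```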
The audience is the AWS account that is supposed to receive this JWT, and then we have the issuer and the issued-at time.

We had three options when we were choosing what to put in the audience field. We could have made it the intended app: if my app was trying to talk to test-app, then the audience would have been test-app. The problem with that is that it doesn't differentiate between environments; it wouldn't differentiate between a staging and production version of test-app, or of a dependency. So we went with the AWS account: at Square we separate each app and its environments into separate accounts, so this works for us. A third option would have been to narrow it further down to the role that we're trying to assume in that account, but there's a limited number of audiences that you can have, so we chose the middle ground of just the AWS account. This prevents the JWT from being taken or stolen and used to impersonate my app; it limits the scope of where the JWT can be used.

A quick overview of how OIDC and AWS work together. AWS IAM supports using OIDC identity providers. These are tied to provider URLs, and roles can have a trust relationship with these providers. IAM will check that the token is coming from the correct URL, and it checks metadata in the JWT: whether the subject has the SPIFFE ID that we expect and whether the audience is the correct one. AWS STS then allows us to use these JWTs to assume into a role, and the role, through IAM, checks all the information that we need.

This is how we were connecting to AWS from the DC prior to OIDC. App owners would create an AWS user and attach a policy to it. Then, in the AWS console, they would request the user's access keys, record those keys in an AWS credentials file, typically on their laptop, and then upload that credentials file to our secret store, Keywhiz.
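To make the IAM side concrete before moving on: the trust relationship just described might look roughly like this sketch, where the account ID, provider domain, and SPIFFE ID are placeholders and the condition keys pin the subject and audience checks mentioned above:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/oidc.example.squareup.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.example.squareup.com:sub": "spiffe://square/example-app",
          "oidc.example.squareup.com:aud": "123456789012"
        }
      }
    }
  ]
}
```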
DC apps would then be able to fetch those secrets out of Keywhiz and use them to run as an AWS user. This process is pretty manual. There are a lot of steps involved, a lot of places it could go wrong, and a lot of places where keys could leak. And it's very unlikely that an app owner would go back and refresh those user access keys, so we've had keys that haven't been changed in years.

The new method, the OIDC method, involves a lot less work from app owners. We've created a Terraform module that automatically creates the role, sets up the proper configuration for the identity provider, and attaches an AWS policy to it. All the app owner needs to do is use that Terraform module and then add a configuration to their app, in what we call the p2 manifest. So all they need to do is add a couple of lines to their manifest. You can see here an example where we specify the role that we're trying to assume into and then a name for that role, in this case extra-role. If you were to have credentials files before, this is basically a profile name. That's all the app owner needs to do; the rest is done automatically by our OIDC architecture.

So let's go over how that works. Here's the broad overview of all the moving parts, and then we're going to break this down into smaller pieces. We're going to start with the p2 hook. p2 is our container orchestration software. At startup, we have a hook that reads the configuration from the manifest and generates an AWS credentials file from it, which is stored in the app's home directory.
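A sketch of what such a generated credentials block could look like; the profile name, role ARN, and especially the flag names passed to the tool are illustrative guesses, not the tool's exact CLI:

```ini
# Hypothetical generated block in the app's AWS credentials file.
# Flag names below are illustrative, not the tool's verbatim options.
[spiffe-oidc-test]
credential_process = spiffe-aws-assume-role credentials --role-arn arn:aws:iam::123456789012:role/spiffe-oidc-test-role --audience 123456789012 --spiffe-id spiffe://square/spiffe-oidc-test --workload-socket-path /tmp/agent.sock
```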
When a DC app needs to talk to AWS, it reads that credentials file, and inside the credentials file we specify a credential process, which is one of the ways you can fetch credentials when you're talking to AWS. The credential process is an open source tool we've written called spiffe-aws-assume-role. That tool requests a JWT from your SPIRE agent, which is signed by the SPIRE server and then sent off to AWS, where it is verified against the cached JWKS that we have in S3 with a CloudFront domain in front of it.

So how do we get that JWKS into S3? We have a cron that syncs the output of the SPIFFE OIDC discovery provider, which is part of the SPIRE implementation, to S3. We'll talk a little bit more about that later, but that's basically how the architecture works: the SPIFFE OIDC discovery provider fetches the key information from the SPIRE agent, and we sync that to S3.

The SPIRE OIDC discovery provider is provided by SPIRE; I'll put up a link for you to check out. It serves the JWKS and the OIDC discovery document. In our case, we serve the endpoints over a Unix socket.

Now I'm going to talk about some tools that we've made to enable our setup, going into more detail about the p2 hook and how it works. Again, this is a sample configuration that would be in the p2 manifest. We have the name of the role, spiffe-oidc-test, and then the role itself that they're trying to assume. Below that we have the block that would be generated in the credentials file. The profile name would be spiffe-oidc-test, and the credential process calls out to spiffe-aws-assume-role with a few options. You can see here that we're assuming into the spiffe-oidc-test role with the SPIFFE ID of the app, spiffe-oidc-test, and we specify the audience and the socket for the SPIRE agent. We also have options for specifying the STS region and the STS endpoint that you're trying to use.
This is in case you're using a VPC endpoint.

We also wrote a cron job to sync the discovery provider to S3 buckets. We did this because we had issues with exposing our staging endpoints to the public, and AWS needs to be able to access those endpoints in staging. So instead of trying to expose our staging environment, we just upload everything to S3 and serve it through a CloudFront domain. We also use custom Square domains, which is possible using ACM certificates; we did this so we'd have more control over the domains. This also gives us caching of the OIDC endpoints and more availability, since CloudFront can serve them from more edge servers. We pull the JWKS every 10 seconds: in case the keys are refreshed by SPIRE, we want to keep them up to date. The discovery document doesn't really change, so we just pull it every 24 hours.

Then there's our open source tool, spiffe-aws-assume-role; I'll include a link for you to check out. The tool uses SPIFFE JWTs to assume into AWS roles. As you saw before, it's used with the credential_process option, and it calls STS AssumeRoleWithWebIdentity with JWTs that it retrieves from your local SPIRE agent. It supports retries, it supports logging, we have metrics, there's support for VPC endpoints, and you can configure the STS session duration, which I didn't show before; that defaults to one hour and is customizable with this tool.

So, in conclusion, the current status of SPIFFE OIDC. We've rolled out to general availability; we just needed to make some changes to shared libraries to use the credentials files in apps' home directories instead of fetching them from Keywhiz, our secret store. We're currently migrating apps to OIDC, and it's been a pretty hands-off process; teams have really been able to do it on their own.
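One aside on the credential-process plumbing used throughout: any credential_process helper, this tool included, hands credentials back to the AWS SDK by printing a small JSON document on stdout. A minimal Python sketch of that contract (with fake credential values; this is not Square's implementation) looks like:

```python
import json
from datetime import datetime, timedelta, timezone

def credential_process_output(access_key_id, secret_access_key,
                              session_token, ttl_seconds=3600):
    """Build the JSON document a credential_process helper prints on stdout.

    The AWS SDKs expect exactly these fields; Expiration is an ISO 8601
    timestamp after which the SDK re-invokes the process for fresh credentials.
    """
    expiration = datetime.now(timezone.utc) + timedelta(seconds=ttl_seconds)
    return json.dumps({
        "Version": 1,
        "AccessKeyId": access_key_id,
        "SecretAccessKey": secret_access_key,
        "SessionToken": session_token,
        "Expiration": expiration.isoformat(),
    })

# Fake values for illustration only; real values come back from STS.
print(credential_process_output("AKIAEXAMPLE", "not-a-real-secret", "not-a-real-token"))
```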
We haven't encountered any major issues, just minor blips: we've had issues with connecting to CloudFront and with CloudFront fetching from S3, which we're working through, but it hasn't really been a blocker.

If you want to learn more about the security infrastructure at Square and AWS, check out our developer blog. I'll include a couple of links: there's a more in-depth article about AWS OIDC authentication, and some articles about how we manage secrets in Lambda and how we provide identities in Lambda. Thank you.