Welcome to another episode of Authorization in Software, where we dive deep into tooling, standards, and best practices for software authorization. My name is Damian Schenkelman, and today I'm chatting about all things authorization, policies, and the Cedar policy language with Emina Torlak, Senior Principal Applied Scientist at AWS. Hey Emina, it's great to have you. Hey Damian, thank you very much for inviting me, it's great to be here. I'm really excited about what we're going to be chatting about, but before we dive deep, could you give our listeners an overview of your background and your current role at Amazon? Absolutely. So I'm currently a Senior Principal Applied Scientist at Amazon and an Associate Professor of Computer Science at the University of Washington. I work at the intersection of programming languages and automated reasoning, which is an area of computer science concerned with automatically proving the correctness of systems such as logical specifications and code. When I'm wearing my professor hat, I work on a programming language called Rosette, which I've been developing for about a decade or so, and which virtualizes access to an automated theorem prover. It makes it possible for people who are not experts in automated reasoning to very quickly build state-of-the-art verification tools for the domains that they care about. Some examples include building a verifier for the control software of a radiation therapy machine in active clinical use at the University of Washington. Another example includes verifying just-in-time compilers that are embedded in the Linux kernel. When I'm wearing my AWS hat, I co-lead the development of Cedar, which is a new language for writing and evaluating authorization policies. We designed Cedar to balance performance, expressiveness, and analyzability, meaning the ability to reason automatically about the correctness of Cedar policies.
If you attended re:Invent 2022, that's where it launched, as part of two other products: Amazon Verified Permissions and AWS Verified Access. This is amazing. It seems you've been thinking for a while about how to add verifiability and testability to very critical things. On the one hand, verified permissions. On the other hand, software for healthcare, which also needs to be fairly consistent and make sure that it does the right thing. It's great to see how these academic concepts are being applied to more practical matters, as you said, as an applied scientist. You mentioned Cedar is a policy language. What does that mean for people who may not be familiar with policies? What are policies? What is policy-based access control, and how does it relate to authorization? So for policies, you can think of them as programs in a domain-specific language. They are a very restricted class of programs. They take as input a principal that needs to be authorized, some action that they want to perform, the resource on which they want to perform the action, and the current context. And based on that, the policy decides if the principal is allowed to do this. For example, a user in a photo-sharing app might want to access a particular photo that was uploaded by their friend; whether they can do so or not is determined by the policy that their friend set. Policy-based access control, I have to admit, I actually had to look that up. That's not what we call it. I had to Google it. And there are different definitions out there, but they all get at the same three concepts that are generally useful and that resonate with us. The first one is, when you're using policy-based access control, and policy-based authorization in particular, you want to separate your code from the authorization logic. This has two advantages.
The first one is, if you want to change your authorization policies, you don't have to change your application and recompile. You just change the policy and things continue to work. And you have a clear separation of concerns: your application logic from your authorization logic. The second thing that resonates with us in the various definitions of policy-based authorization is that when you externalize your authorization into a language, this language should give you the flexibility, and the right concepts, the right abstractions, to express the authorization notions that are relevant to your application. So don't be dogmatic. In some cases, it makes sense to grant authorization based on roles and hierarchies. In other contexts, it makes sense to do so based on attributes. So the ability to mix and match those freely is one of the things that we wanted to put into Cedar. And then the third aspect, which may be underappreciated but is extremely important, is that whatever policy-based authorization we provide for you to build your applications with, the thing that really matters for your customers and the users of your applications is providing the right UI. There is no universal UI to an authorization system that works for all applications. If we're talking about a UI for somebody to use to authorize their photos or documents, you want something point-and-clicky, right? You don't want to make them write policies by hand. On the other hand, if you're giving power to your system admins to write authorization policies, maybe they really do want the text interface, and you provide them a nice language to do that in. So the thing that we were thinking about with Cedar was that we wanted to provide a language that makes all of those things easier. It's a layer that you can use from within your application to give a UX to the authorization system that makes sense for how your application is going to be used.
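To make the earlier photo-sharing example concrete, here is a sketch of what such policies could look like in Cedar. The entity types, action names, and attributes (User, Photo, viewPhoto, owner, friends) are illustrative assumptions, not from a real application:

```cedar
// Allow one specific friend to view one specific photo.
permit (
  principal == User::"alice",
  action == Action::"viewPhoto",
  resource == Photo::"trip.jpg"
);

// Or, attribute-style: any principal in the photo owner's
// "friends" set may view the photo.
permit (
  principal,
  action == Action::"viewPhoto",
  resource
)
when { principal in resource.owner.friends };
```

The point from above applies here: the friend changes who can see the photo by editing policies like these, not by changing and recompiling the application.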
Okay, this is very interesting, and you touched on a few topics that we'll probably dig into over the course of the show. Essentially you said, okay, policies allow you to define rules based on who's trying to access something, what they're trying to do, and what they're trying to do it with. And then with some context, which might be something like an IP address or the specific permissions they have and a few other things, you're trying to make this authorization decision, which in some cases might be based on someone's role, and in some cases on other attributes. That's where you touch on topics like attribute-based access control and role-based access control. And then you also said, well, we also wanted to make sure that whoever was using this as the end user had a user interface that allowed them to understand what was going on. Can you share a bit more about that? Because I don't think that's something that, at least, I've heard a lot when folks talk about designing authorization languages. So the thing that we wanted to do with Cedar is to give you a general substrate that would make this easier. Let me talk about one particular aspect of the Cedar language that makes UX building easier, and that is the notion of templates that we have in Cedar. You can write a generalized version of a policy where you're, let's say, leaving some aspects of the hierarchy unspecified. Then as your user is interacting with the UI, they're clicking where in the hierarchy they want to authorize something. So, for example, my manager is allowed to see my employee records, or something like that. The only thing that the application needs to do then is to turn these UI clicks into instantiations of this template.
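As a sketch of the template idea just described (the entity and action names here are made up for illustration), a Cedar template leaves slots open that the application later fills in:

```cedar
// Template: which principal may view which subtree of records
// is left open via the ?principal and ?resource slots.
permit (
  principal == ?principal,
  action == Action::"viewEmployeeRecord",
  resource in ?resource
);
```

When the user clicks, say, their manager and a part of the hierarchy in the UI, the application links the template with those concrete values, instantiating a real policy without ever generating policy text itself.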
So that is one example, one feature that we built into Cedar that makes building UXs easier, especially the kind of UXs that are pointy and clicky. Now, we do have applications in which people just want to write Cedar. One example of that is AWS Verified Access, where the people who are interacting with AVA are writing Cedar policies, essentially implementing a zero-trust system for accessing corporate applications. So you can do either thing, whatever makes sense. In the context of one application you want to expose the text interface; in the context of another application it really should be a more pointy and clicky interface. That makes sense, and it seems that this is one of those things that you did to make sure that you address expressiveness in the Cedar system. Whenever I see languages developed, particularly in an enterprise setting, I find it interesting to understand more of the background and the thinking process. So you said, hey, we're looking for performance, we're looking for expressiveness, we're looking for analyzability. What made you create Cedar? What were you able to do, maybe, with other tools and other languages, and what made you say, we're going to have to write a new one because we think this is the way to go? Yeah, it's a really good question, and it's kind of an interesting story, because we resisted building a new language for the longest time; doing so is just not a thing to be taken lightly. For a very long time, customers were coming up and asking us for help. They were saying, hey, you guys built this IAM language and it seems to work really well for protecting AWS resources. We have our own resources that we want to protect, and it's been a struggle. We really have had customers come and tell us: we have tried to build our own in-house authorization system three times. We're terrified of it. It's hard to scale, it's hard to get the language right. Can you help us do it?
So hearing this enough, we took a look at the current landscape to see what's out there and whether we could just direct people towards an existing solution and say, go use that, with 100% confidence that it's going to satisfy your needs. What we essentially found is that you can place the current solutions in what is actually, like I said, a three-dimensional space. On one axis you are considering performance: how fast your authorization runs, and whether you can actually bound the latency when somebody makes an authorization request. On another axis, you can place the expressiveness of the language. Can I express everything that I need to say? Can I talk about role-based access control? Can I talk about ABAC? Can I talk about other things? And then the third dimension is: is it actually possible to prove properties of these policies? Now, this last analyzability dimension sounds very academic, but it is something that is currently done behind the scenes in tools that are very popular among IAM users in particular: the IAM Access Analyzer and S3 Block Public Access. Under the hood, they're using a theorem prover to establish properties of IAM policies. That is one thing that customers really wanted. They came to us and said, can you give us that? In whatever solution you come up with, we really want this functionality, because it has helped us with compliance, with all sorts of things. It really makes our lives easier. So on one hand, you have the languages that are extremely expressive, and that's very good if you need to be super flexible. But the price of expressiveness, and this is true of programming languages in general, nothing specific to authorization, is that the more expressive the language is, the fewer things you can guarantee or say about its performance.
A classic example is, if your language includes loops, you can't even guarantee that a program terminates. And on the other hand, if you want to have really good performance, you really have to limit your language; what you can say in the language is pretty limited. And the third dimension, analyzability, is just like that. If your language includes certain features in order to be expressive enough, chances are you won't be able to analyze it. In technical terms, it means that the analysis problem for the language becomes undecidable: it's not actually possible to write an algorithm that determines whether a policy can do something or not. So what was really missing in this space is the point in the middle, where you try to balance these concerns. You get just enough expressiveness, not super expressive, just enough to cover the basic authorization use cases: RBAC, ReBAC, ABAC. And it's sufficiently constrained that you can put bounds on performance; you can say something about latency, how long an authorization call is going to take. And you can also analyze these policies automatically. That makes sense, and I have a couple of questions about that. The first one is, you talked about performance, and you also contrasted that with expressiveness. You said, hey, the more expressive something is, the fewer guarantees you can make, which also factors into performance. When we talk about performance, are we talking about performance of the entire authorization decision? Or are we talking about performance of how long the policies take to run? And this naturally depends on what the policies are doing and whether they are doing one thing or another. So how do you folks think about it? So you're absolutely right. There are two dimensions.
The dimension that's directly under Cedar's control is, once you have done the hard job of deciding which policies to evaluate and which data you need to pull in to evaluate them, then we give you a bound on how long this evaluation is going to take: how long we're going to take, given all the data, to make an authorization decision. And for some typical use cases with hundreds or thousands of policies and entities, our authorization latency is less than one millisecond. So it is super fast, super cheap. But like you pointed out, there is a much bigger problem surrounding this inner problem, which is fetching the policies and fetching the data. In the case of Cedar, that is something that the applications or services built on top of Cedar are responsible for doing. So our interface really stops at the authorization and evaluation layers. After you have gathered all the data and you give it to us, we're promising to come back, in the typical case, in under a millisecond. That makes sense. Yeah, it's usually hard to make commitments for things that you don't control, particularly in very complex systems. The other thing I was curious about is, you mentioned being able to verify what your policies essentially allow you to do, what things will or won't happen. This seems very interesting, because the industry talks a lot about SIEM, which is security information and event management. And essentially that's after the fact: figuring out what things happened, and maybe doing anomaly detection on who might have accessed things that they shouldn't have, making sure that you're very responsive there. But this seems to be more about preventing, making sure that things that shouldn't ever happen don't happen. How did you folks come to think about it? Because, as you said, one thing is, hey, customers came to us and said we want help implementing these capabilities.
But another very different one is learning, hey, these customers actually need this capability, even though they might not have asked for it, because they might not know it's even possible to do. IAM has already been through this process. And through this process, they decided to build the IAM Access Analyzer, which tells you in simple terms what your IAM policies allow. And you can write scripts to process the data from IAM and decide whether you have any security holes in your system or not. Same thing for S3 Block Public Access. Within that context, the basic property is: whatever I write with my policies, I really shouldn't make my buckets public, right? So that's another thing that we can check for you automatically. So in the context of Cedar, it's actually a good question what it means to verify Cedar policies. The thing that both the IAM Access Analyzer and S3 Block Public Access have in common is that under the hood they are both using the same engine, which is called Zelkova. It takes as input policies in IAM and translates them to logical formulas, which it then analyzes. So we have the same engine, call it the Zelkova equivalent, for Cedar. But the really good question is, how do you expose this capability to customers? For IAM, you have the domain knowledge to come up with checks that make sense for all IAM users. Like, you know what an S3 bucket is, you know what an ARN looks like. In Cedar, there is no such thing, exactly because you bring your own resources and your own principals and your own attributes and so on to the table. So the question is, what are interesting properties to check about Cedar policies? We are talking to customers and figuring those things out. But to give you an example of a check that people have been pretty excited about and have been asking us for: checking the equivalence of two policy sets.
So imagine that you had a deadline, you wrote a bunch of policies, you tested them, and they kind of seem to be doing what you want. They seem to be working, but there are, like, ten of them and they're super nasty and ugly. So the deadline passes and you want to refactor them, right? You want to rewrite them so they look nice, they're easy to audit, they're easy to understand, and you get it down to three policies. Now, how can you be sure that these ten policies that you had and these three beautiful policies that you've written are actually doing the exact same thing? That they will never disagree; that they will agree on every single request that comes in. If one says yes, the other one is going to say yes, and vice versa. This problem, equivalence checking, is something that we can do with the kind of automated reasoning that we have built into Cedar. That makes sense. So that also goes to what we were saying earlier: rather than sampling from a very large number of input combinations, you can actually mathematically verify that these two things are equivalent, which gives you much stronger safety. That's very interesting. You also mentioned the challenge around: when you give us the model by which you're expressing your authorization, we don't know what it means, and that's also what makes some of these things challenging. We were facing some similar challenges while developing some of our own authorization solutions, like OpenFGA: you're telling us all of your nouns, right? You have your folders, you have your files, but we don't know what they mean, and they might mean different things to different companies working on these projects, even if the noun label is the same. Exactly, exactly. So that's really one challenge of building generic tools, tools that people are supposed to customize: it then becomes difficult to provide universally useful tooling and environments and an ecosystem for that.
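As an illustrative sketch of the equivalence check described above (the entity and attribute names are made up), an automated-reasoning-based checker can prove that a refactored policy set agrees with the original on every possible request:

```cedar
// Original set: two separate permits.
permit (principal, action == Action::"readDocument", resource)
when { resource.owner == principal };

permit (principal, action == Action::"readDocument", resource)
when { principal in resource.admins };

// Refactored set: one policy. An equivalence checker can prove this
// allows exactly the same requests as the two policies above,
// rather than merely testing a sample of inputs.
permit (principal, action == Action::"readDocument", resource)
when { resource.owner == principal || principal in resource.admins };
```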
You have all of these interesting usability questions that come up. What are the properties that every Cedar user is going to care about? Equivalence might be one of them. Maybe that's not the first-order concern, but that's one of the things that we really look forward to finding out as more people use the language and interact with it. So you've been talking about provability, performance bounds, being able to know that what you have there has some guarantees. What was the process of developing Cedar like, to achieve those properties, and what tools did you and the team use to get those results? So one of the things that we have focused on from the beginning, in addition to performance, the thing that we really care about, is safety. You are putting Cedar on the critical path of applications, critical to security, which means that we want to be really sure of two things, actually three things. Thing number one, which I'm going to jokingly call the human-computer interface, is that we have to write the Cedar language specification in a way that is completely clear, so that you as the human, when you're translating your intent, whatever is in your head, into the policy language, are less likely to make mistakes. You have to understand what the specification is actually saying, and it has to be unambiguous. So this is why we settled on writing a formal specification for Cedar, and I can talk more about what that actually means. The second one is, once you have this formal specification written down, there are properties of Cedar that we wanted to prove. We wanted to be 100% sure that those properties hold about the design of the language, that is, the specification. One example is very simple, and it better hold in any authorization system, and that is that the default behavior of the system is deny.
If there's no explicit permission given to do something, and you're asked whether you can do it, the answer should be no by default. We can prove that. In addition to those generic properties that have to be true of all authorization systems, we also have properties which are unique to Cedar and related to performance. For example, the syntax of the Cedar language is designed in such a way that there is a part of the policy, which we call the policy scope, such that if you use the information from the policy scope to do indexing, and to retrieve policies based on the contents of the scope, you are guaranteed to pull in all the policies that you need in order to make a correct authorization decision. So your mental model when using Cedar is: if you have a million policies, all of them get evaluated to make an authorization decision. That's your easy mental model as a user. Of course, this is not going to happen in real life, right? Somebody has to select a subset of those policies that's actually relevant to that request and evaluate only those, because otherwise it wouldn't scale. How to select that subset is a problem that we call policy slicing, and we wanted to prove that the policy slicing algorithm that we proposed, based on the syntax of the Cedar language, is sound: you're guaranteed to pull in all the policies you need. So that's another example of a thing that we wanted to prove about the design of the language, the actual spec. And finally, the third thing that we wanted to ensure is that what we actually implemented in Rust, which is how we get the good performance, actually matches the language spec. Because if you have a clear spec and you prove things about it, who knows whether your implementation matches it or not? We wanted to find a way to bridge that gap as well. And we do bridge that gap with a technique called random differential testing, which has been used very successfully to test compilers.
It was pioneered in the context of testing C compilers. And that is what we used to establish the correspondence between our Rust implementation and our formal specification, which is written in a language called Dafny. Okay, so there's a lot to unpack here. Let me try to break that down, and correct me if I didn't get anything right. So you folks said, hey, we're going to start by formally defining the language. That means no coding, no implementation. Let's just make sure that the rules for the language, the grammar, what is valid to express, can be mathematically expressed, and we understand what we might be able to do. So that's step one, conceptually and, with air quotes, on paper. Then you said, okay, I'm going to start using Dafny to guarantee some of the properties of that language specification. And this is where you started going, like, hey, default deny, make sure that that's a thing. But also the rules around scope, right? Scope is: if you specify your principal, your action, and your resource as, I'm going to use air quotes again, a pre-filter, then that's what allows the Cedar engine to figure out which policies absolutely need to run in order to make that authorization decision. And that's one guarantee. And once you folks had the specification, once you had the Dafny proofs making sure that everything was working, only at that point did you go on and say, okay, now we're going to write some Rust code, which is probably also to guarantee performance upper bounds, no garbage collection, making sure that you folks know when memory is allocated, et cetera. And is there kind of a feedback loop here, like Dafny verifies Rust? How does this work? Yeah, yeah. So I think you had the summary correct, except that the process was a little bit more concurrent than that.
It wasn't a waterfall model; it was more iterative. We were developing the Dafny and the Rust at the same time. The cool thing about building our specification in Dafny is that Dafny is both a theorem prover and a full-fledged programming language. So when we say Cedar formal specification, it sounds like we wrote a bunch of math formulas; we actually wrote an interpreter for Cedar. Our specification is a reference implementation. The difference between that implementation and the Rust one is that with the Dafny one, we focused on it being very small and very readable; we didn't care about performance at all. So we used crazy features of Dafny, like set comprehensions, to implement the code, and this is not something that you would use in a production implementation, where you need it to run fast. So basically, we built Cedar twice: once in Dafny, essentially as a functional program, and once in Rust. And we prove properties of this functional program in Dafny. Then we have a system built on top of cargo-fuzz that generates millions and millions of inputs: millions of Cedar policies, millions of inputs for these policies, and millions of entity stores. It feeds them to both implementations. If they agree, we're good. If they disagree, we found a bug. Maybe it was a bug in the reference, maybe it was in the implementation; then you have to examine it as a human being and figure out which was right. So there's definitely a feedback loop there. That's very neat.
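To connect this back to the policy scope and slicing guarantee discussed a moment ago, here is a sketch (with illustrative entity and attribute names) of the part of a Cedar policy that a slicing index can rely on:

```cedar
// The scope is the (principal, action, resource) part of the head.
// An index built on scopes can safely skip this policy for any
// request whose principal is not in Group::"engineering" or whose
// resource is not under Folder::"designs" -- the soundness proof
// says no relevant policy is ever skipped this way.
permit (
  principal in Group::"engineering",
  action == Action::"viewFile",
  resource in Folder::"designs"
)
when { context.authenticated == true };  // arbitrary conditions live outside the scope
```

And any request that no permit policy matches is denied, which is the default-deny property mentioned earlier.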
And how do the end users, or, I guess, developers or IT admins, whoever you folks think of as an end user, get to participate in this process of making sure that it's not just that you can prove things about the language and that performance is bounded, but also that it's readable, it's understandable, it's something that folks can write and iterate on? Right, yeah, that's a good question. So far, we've been using the Dafny model as our source of truth for development internally. For example, we talk about things like building a type checker for Cedar. It turns out that Cedar is dynamically typed, but it has an optional type checker. If you tell us the schema that describes the shape of your data, so who rolls up to whom in the hierarchy, what attributes you have, what the types of those attributes are, we have a type checker that can check this for you. Now, one thing that you want to prove about type checkers is soundness. That means, for example, that if the type checker comes back and says your policy is correct, then when you run it at runtime, it's guaranteed not to throw any type errors; the evaluator is not going to be throwing any type errors. For this kind of property, again, we modeled every major component both in Dafny and in Rust and did the same process: proofs, differential testing, and so on. And this is what we have been using the models for so far. You know, a super nerdy academic thing that I would eventually like to do, in my copious spare time: I think it would be super cool if we could generate the English specification from the Dafny. Right now they are both done independently.
Okay, so we write the Dafny, then we talk to our tech writers and we figure out how to translate it into nice English. But it would be super cool if we could just take the formal spec and generate the English that people can read and understand from it. That would be neat. I'm sure someone will figure that out with generative AI, now that it has all the hype. You mentioned dynamic types with the possibility of doing static checking, and you mentioned, for example, the schema feature. How did you make sure that end developers, the people that were going to be using Cedar to express their business policies, understood the language and were happy with the feature set? How did that work? So one thing that we have done throughout the process of developing Cedar, which has been extremely helpful: when we first started designing it on paper and writing little prototypes, which are now in some dusty repo somewhere, who knows even whether they work or not, we made sure that throughout this entire process we were talking to both external teams and internal teams within Amazon, who came to us originally and said, hey, we have this authorization problem, can you solve it for us? We would write a prototype, go to them and say, hey, this is what we're thinking right now, does this look like a good idea? And they would say, yeah, this is okay, but this part is unacceptable to me. So one interesting story, and one question that people often ask, is: well, why is Cedar actually dynamically typed? Why are you not enforcing a static typing discipline from the start? And the answer is, that's actually how we started. Our original, very first, version-zero draft of Cedar was strongly statically typed, almost Haskell-like in how strict the typing was. And we took that to potential customers and they said, I can't work with this.
And the reason is very simple: the authorization data that they work on, they don't control; it comes from third parties. So they don't necessarily know the shape of the data in advance. And it means that we have to allow them, within their policy, to write very dynamic things, like: if this entity has the name attribute, then compare it to the string "Emina". You can't demand that the data have the name attribute, and you can't even demand that they know the shape of the data they're operating on. So that's how we decided to make typing optional. Cedar is dynamically typed by default; the semantics is specified in that way. But then other customers came along and said, hey, well, I actually know the shape of my data. Can you help me use this to make sure that the policies that I write are correct, that I'm not writing typos and I'm not accessing an attribute that doesn't actually exist? And that's how we built this optional type system. If you have the schema, if you know the shape of your data, you get this extra security and extra safety. That's neat. You mentioned dusty repositories, and we were talking about developer feedback. Have you folks considered making Cedar open source? Yes, we have. I'm pretty excited to say that we just released Cedar at the Linux Foundation Open Source Summit, and you can find it on GitHub at cedar-policy. Wow, that's amazing. Congratulations. Thank you. How did you come to this decision? You know, obviously, I'm super biased, so take this with a grain of salt. But we think that the security and performance properties of Cedar, and the way that we built it, make it a pretty good language for many of these applications.
And we really wanted to enable the broader open source community to build on the work that we've done so to benefit from it, to extend it, to come up with cool ideas, because basically, every time we have extended the reach of Cedar when we included more people, we have gotten we have gotten invaluable feedback that has inevitably made the language better, you know, incorporating it made it more safe, more secure, more usable. So, you know, there is the community building aspect, but there's also the selfish aspect before, because more people keep the tires, the better it's going to get. Yeah, that's one of the nice things about communities and open source. You get feedback that I think it's typically a lot more organic, a lot more natural than if you just have kind of like a close product, because the use cases are different, because you can kind of like peek under the hole and do things that maybe you can't otherwise. Are the Rust and the Daphne implementations going to be open? How's that going to work? Yes. So, we're making everything open. So, all the Daphne code is going to open, it's going to be open all the Rust code as well as the framework, the differential testing framework that we're using to establish the equivalence of the two. We feel that this is really important because it builds trust with customers. You can not only analyze our code, which is of course the whole point of being open source, but if you look at our proofs and you can look at the statements of the properties that we proved, then you know, convince yourself that it's a property that you care about, it's a good one, or maybe if it's not, you can come back to us and say, hey, I actually really care about this other thing. Does this other thing hold about your language? And if it does, we or you can write a proof about it. That makes sense. The tuning for some of these things might not be something that folks use all the time, but how do folks think about the community contributing? 
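As a side note, the differential testing framework mentioned here checks that two implementations of the same function, in Cedar's case the Dafny specification and the Rust engine, agree on the same inputs. A rough sketch of the technique in Python, with toy stand-in functions rather than Cedar's actual harness:

```python
import random

# Toy "reference" and "production" implementations of the same function.
# In Cedar's case these roles are played by the Dafny spec and the Rust
# engine; the even-number check is just a stand-in to show the shape
# of the technique.
def reference_is_even(n: int) -> bool:
    return n % 2 == 0

def production_is_even(n: int) -> bool:
    return (n & 1) == 0

def differential_test(trials: int = 10_000, seed: int = 0) -> None:
    # Feed both implementations the same random inputs and demand
    # that they agree on every single one.
    rng = random.Random(seed)
    for _ in range(trials):
        n = rng.randint(-10**9, 10**9)
        assert reference_is_even(n) == production_is_even(n), n

differential_test()
```

If the two ever disagree, the failing input is a concrete counterexample that pinpoints a divergence between spec and implementation.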
What would that process be? For example, do you need to send a PR that passes the spec and also runs the proof steps, and is there an environment where people would be able to quickly try this out or set up the repo locally? Yes. So setting up the repo locally and kind of trying out Cedar and the demo apps, you know, that's all easy, or at least we have tried to make it as easy as possible. But this point that you make about people not necessarily being familiar with verification and Dafny, and how is that going to work in terms of open source contributions, it's a very good question, right? Because it is an unfamiliar development model, and extending Cedar in that sense is a process that involves updating both the formal specification and the code. So the way that we decided to do this is having a similar RFC process to what other open source languages like Rust do. So if you want to contribute something that's not a core feature, that doesn't need verification, that's easy, right? You open a PR just like everybody else, you know, you get code reviewed, and if people like it, we pull it in. So an example of that could be, say, a sidecar doing validation for Cedar, right? But if you need to change, or you want to propose a change to the core language, the core semantics, then you go through the RFC process. And if it's something that the community agrees is a good idea and we like it, even if you don't know anything about Dafny and you just want to write the Rust code, that's possible too. We have the expertise on our side to make the proofs go through again. So that's interesting. So it would be a mix: hand-holding support for features that folks are familiar with, then you'd have the option of kind of like the small-PR menu item, where you say, hey, this is small enough, it should be able to get in. And then for larger things, it might be more of a full verification model, where people can ramp it up on their side.
That makes sense. What about the future? What are your plans for future development of Cedar, and are there specific features or improvements that are coming that you're excited about? So we are just at the beginning of our journey. We just open sourced it. There are a lot of things that we're excited about. Our roadmap is pretty open. So I'll tell you some things that we have in mind, and all of this is subject to change based on what people actually want. So the nice thing about open source is, if Cedar sounds like a good idea to you, like something that might be useful for your application, you have the option to influence it. So do PRs, participate in the process, let us know. Things that we know for sure people want right now, for example, are bindings for additional languages. So right now it is easy to use Cedar if you have Rust code, and also for Java, so we released bindings for Java. But there are many more languages out there that people like and enjoy using, so those are definitely on the roadmap; we want those additional bindings. Having a sidecar: so a lot of these authorization systems work by having a sidecar, and people are used to that, they know how to work with it, and that's another thing that we'd like to implement. On the more technical side, an idea that we've been playing with is doing some form of type inference. So right now you have two modes, right? One of the modes is you don't have any types, so, you know, you just run everything at runtime, and the type errors, you deal with them then; it's fine. The other mode is you know everything, so you give us a full specification, the schema, and we check it. But there is a third mode: we're finding some customers exist in that kind of in-between area, where they know some types and they don't know the others. So then the technical question becomes, can we actually infer those?
So, again, whether this is a good idea or not will depend on how many people use it, how many people are excited about it. And in that sense our roadmap is really about figuring out what's most useful, what people are excited about, and engaging with the community in that way. Yeah, this is one of the great things about the community and being able to gather feedback, right? You mentioned, for example, the sidecar stuff, and I'm sure that's going to happen for, again, typical orchestrators like Kubernetes. And then there's going to be, hey, I need to make HTTP requests before running my policies, can you folks help me with that? I'm sure a number of projects will come from this. I know also that, again, this is being open sourced, but you've been seeing Cedar run in real applications for a while as part of Amazon Verified Permissions. What is Amazon Verified Permissions, and how does it differ from Cedar, so that people might understand kind of like how they work together and what one is and what the other is? Yeah, that's a good question. So we've been talking about Cedar so far, and the best mental image that you can have of Cedar is that it's a language implementation. It gives you a language and an API for evaluating policies in that language, and if you have a very small application, that's probably enough, right? So let's say you have three policies that don't change very often. You store them in a file; when the application starts up, you load them up, and then you just call Cedar whenever you need to authorize a request, right? So this is a model that's completely workable for small applications, or when, for whatever reason, you have to cache policies locally. Again, this is where you take the SDK model and run with it. But when you start trying to scale, and a lot of AWS customers operate at a very big scale, then all sorts of problems creep in that Cedar itself does not solve. So, you know, some obvious ones are: where and how are you storing policies?
Can you deal with policy governance, policy versioning? How are you evaluating these policies? How are you doing the policy slicing? So Cedar itself doesn't do the policy slicing for you. It gives you a theorem that says: if you do the policy slicing this way, things are going to be correct. But if you have a massive store with billions of resources and millions of policies, somebody has to implement a distributed database somewhere that actually implements this slicing algorithm. So all of those things are what AVP does for you. Okay, so you could think of it as a managed-service policy store that stores Cedar policies, that versions them, does governance, all of that on your behalf. And then when a request comes in from your application, AVP gathers the policies that need to be evaluated, the slice, and answers the request for you very quickly. So now the "very quickly" becomes, you know, the question that you asked at the beginning: well, it's not going to be a millisecond any longer, because AVP has to take some time to gather those policies. But again, it's going to be very fast, because the people who are building AVP are amazing engineers who have been doing this for years. So they've really gotten the art of building distributed systems down. So that is the difference between AVP and Cedar. Okay, so that makes sense. It seems like Cedar is the language where you kind of define your policies, and AVP is the way in which, within AWS, you can author them, version them, make them available whenever you need to run them, and so on. What are customers doing with AVP and Cedar? Maybe, what are two interesting examples that people might learn about, so that they know, hey, this is the kind of thing that I could be doing; maybe I didn't know about AVP, I didn't know about Cedar, and I can get started with it. Yeah, that's a good question. So, you know, heavy usage of Cedar is currently coming from AVP customers.
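To make the slicing idea concrete: instead of evaluating every stored policy on every request, the service fetches only the slice that could possibly apply. A toy sketch of that idea, with a hypothetical data layout rather than AVP's real implementation; the theorem mentioned above is the guarantee that evaluating the slice gives the same answer as evaluating everything:

```python
from collections import defaultdict

# Each toy policy names a specific resource, or None for "any resource".
policies = [
    ('Photo::"a.jpg"', "permit"),
    ('Photo::"b.jpg"', "permit"),
]

def build_index(policies):
    # Group policies by the resource they mention; "*" collects the
    # policies that apply to any resource.
    index = defaultdict(list)
    for resource, decision in policies:
        index[resource if resource is not None else "*"].append(decision)
    return index

def slice_for(index, resource):
    # The slice: policies naming this resource, plus the wildcard ones.
    return index.get(resource, []) + index.get("*", [])

def authorize(index, resource):
    # Default deny: allow only if some policy in the slice permits.
    return "permit" in slice_for(index, resource)

index = build_index(policies)
assert authorize(index, 'Photo::"a.jpg"')      # slice of 1 policy, allowed
assert not authorize(index, 'Photo::"c.jpg"')  # empty slice, denied
```

In a real deployment the index lives in a distributed store and the keys cover principals and policy scopes too, but the shape of the optimization is the same.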
And roughly speaking, there are three classes of applications that we're seeing. So one of them, we can call them consumer-facing applications. So think about the financial industry, where they are building banking services for end users. These sorts of applications are interesting because the rules tend to be very ABAC-heavy, so attribute-based access control, because, you know, there is no natural hierarchy between banking users, right? So people want to do things like, you know, authorize legal dependents, signatories on the accounts, that sort of thing. So those policies tend to be very ABAC-heavy. And then, going back to that question of interfaces or UXs for authorization, these will be, you know, mostly these more point-and-click interfaces that use Cedar templates to make that go through easily on the side of the application code. The second class of applications that we're seeing are internal services. So, you know, we have some within AWS, and in some other organizations as well. But you can think of it as: an organization has a lot of sensitive internal resources, think, you know, billing data, for example, that it needs to make available to employees and applications within the organization, but in a very limited fashion. Okay, so the person who owns the data wants to control the access: they determine who gets to call it, how they get to call it, how they get to use it, and so on. These tend to be kind of a mix of attribute-based and role-based. So for example, you know, developers in a certain organization are allowed to access my billing data, but only if, you know, it's no more than three days old, that sort of thing. So that's the second class of applications that we're seeing for AVP. And the final one is kind of business-to-business software-as-a-service applications.
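For readers new to the two styles, they can be contrasted with small hypothetical Cedar policies; all entity types, actions, and attributes below are invented for illustration:

```cedar
// ABAC-flavored: the decision hinges on attributes of the principal
// and resource, not on any hierarchy.
permit (principal, action == Action::"viewStatement", resource)
when { principal.isSignatory && principal.accountId == resource.accountId };

// RBAC-flavored: the decision hinges on role membership.
permit (principal in Role::"manager", action == Action::"readRecord", resource)
when { resource.owner has manager && resource.owner.manager == principal };
```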
So think of somebody building an HR application, and then having other companies subscribe to this HR application to provide services to their own employees. And these tend to be more heavily role-based, right? So a manager is allowed to access their employees' records, rather than attribute-based, but, you know, it's a mix of all of those. So those are the things that we're most frequently seeing right now. Yeah, I can understand that, I can really relate to it. When you get into B2B with custom models per customer, things start to get a bit messy. And I'm sure a lot of the optimizations around Cedar and the scope start to make a lot more sense. Yes, yes, it's definitely a problem. And, you know, once you've seen enough of them, you really get to empathize with your customers and how much they have to go through when they are trying to implement these things on their own. Yeah. Hey, this is amazing. It's been great learning about what you folks are doing, great learning from you. I have one final question, which is: where does the name Cedar come from? That's a good question. So there were some internal versions of the IAM policy language that started with the letters A and B. C was the next letter in sequence. The previous two languages were named after trees, so we decided to go with the alphabetical trees. Okay, it's always interesting to learn about those things. Like, how did this very, very big thing, maybe in 10 years, end up being named? Oh yeah, the first two letters were taken and they were all trees. That's very neat. Well, it's been great to have you. I really appreciate your time. I know you've been working a lot on the launch and making sure that things are great, making sure that the community can contribute to Cedar. It's been amazing to have you here. I learned a lot, and hopefully everyone listening in has as well. Thank you so much for having me. It's been an absolute pleasure.
That's it for today's episode of Authorization in Software. Thanks for tuning in and listening to us. If you enjoyed the show, be sure to subscribe to the podcast on your preferred platform so you never miss an episode. And if you have any feedback or suggestions for future episodes, feel free to reach out to us on social media. We love hearing from our listeners. Keep building secure software, and we'll catch you on the next episode of Authorization in Software.