Welcome to another episode of Authorization in Software, where we dive deep into tooling, standards, and best practices for software authorization. My name is Damian Schenkelman, and today I'm chatting about authorization and Topaz, T-O-P-A-Z, an open source project for authorization, with Omri Gazitt, co-founder and CEO at Aserto, the company behind Topaz. Hi Omri, it's great to have you here. Hey Damian, it's great to be here. Thanks for having me. Yeah, really excited about this. We usually start by asking folks to share a bit about themselves, so could you give our listeners an overview of your background and what you've been up to? Sure, yeah. I've been in the industry for over 30 years now, which is a little scary. I've spent pretty much my entire career building tools for developers. First at a startup where I was the founding engineer, back in the very early 90s. I joined Microsoft in the late 90s, was very fortunate to help co-found the .NET project and later on the Azure project. Worked on a lot of the early web standards, starting with SOAP and WSDL and WS-Federation. That's where I met Vittorio and other people that were working on the very, very early versions of what became all the interoperability standards like OAuth2 and OpenID Connect and SAML, and so on. Led the Access Control Service while at Microsoft as the general manager of that, and a few other things. And I left Microsoft because I was largely frustrated by how little we understood open source. I felt like open source was a huge factor that we just completely missed at Microsoft. Spent the next 10 years working on open source like OpenStack and Cloud Foundry and the Kubernetes ecosystem and Puppet. And then Aserto is my third startup. Started that in late 2020. That's nice. So, as you say, you've been doing dev tooling.
You've been doing security and identity and you've been doing open source, and this is a big reason why it's great to be chatting with you, and we're going to dive deep into this topic. Let's get started. What is Topaz? What are we going to be talking about? Well, Topaz is an open source authorization engine that basically combines some of the really interesting, good ideas that we've seen over the last, let's call it three years, in the authorization space. The first one is this idea of policy-as-code, and the best representation of that today is the Open Policy Agent project out of the CNCF. It's not the first one that's attempted to do it, but it has the largest open source ecosystem around that idea. The second idea is the opinionated, fine-grained authorization model that we all read about in the Zanzibar paper. And then the third idea is really the idea of doing authorization in real time. That is, rather than trying to rely on scopes and access tokens that are minted at authentication time, the idea of separating authorization into a separate process that happens right before you want to grant or deny access to a protected resource. So those are the three things that really combine into Topaz. Yeah, those are some of the big trends in the industry, I think. Real-time authorization is, hey, you want to make sure that you have the latest and greatest when you make any decision, and I'd attribute a big part of that to the zero trust concept. And then the whole ReBAC, relationship-based aspect with Zanzibar, and policy-as-code. Those are the three big trends that we've been seeing as well, and I think everybody working on authorization is closely following them. Now, before we dive deep, you have mentioned a few concepts. Some of them might be new to folks, some of them we've discussed in the past on the podcast. But when you say authorization engine, what do you mean?
What makes an engine, and why is having an authorization engine important? Well, an authorization engine quite simply takes inputs and produces an output. The inputs are typically a subject, an object, and some kind of policy or permission, some decision that you're trying to compute. And the output is, of course, a decision. And probably just as importantly, a log of that decision. So an authorization engine typically takes three inputs, the user, the policy or permission, however you want to call it, and the resource that you're trying to perform an operation on, and gives back an allowed or denied decision. Okay. So from the outside, it's kind of like this black box where I send it who wants to do something, what they want to do, and what they want to do it on. It replies whether they can do it, and essentially keeps a log, an audit log, of what happened, which is important for compliance reasons, security reasons, and a few other things. And then you mentioned a few things, like, hey, you can write these policies, these rules, in code that allows you to specify who can get access to what. So that's part of your authorization model. And then you mentioned this notion of Zanzibar and how you can use this relationship-based authorization to make decisions, which we're going to dive deep into in a few minutes. But before that, what made you decide to implement Topaz? Why is this something where you said, hey, we need to do this, this is very important? Yeah, that's a great question. I would say it goes back to, first of all, we have pretty deep open source roots, my co-founder and I. When we first started Aserto, we looked around, and we can get to why we even got to authorization as a problem. But when we looked around the authorization space, we saw OPA and we didn't feel like we needed to reinvent the wheel.
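The engine interface described above, subject, action, and resource in, decision and audit log out, can be sketched in a few lines. This is an illustrative toy, not Topaz's actual API; the `Engine` class and its rule format are invented for the example:

```python
import time

# A toy authorization engine: subject + action + resource in,
# decision out, with every decision appended to an audit log.
class Engine:
    def __init__(self, rules):
        # rules: (subject, action, resource) triples that are allowed
        self.rules = set(rules)
        self.audit_log = []

    def is_allowed(self, subject, action, resource):
        decision = (subject, action, resource) in self.rules
        # Record every decision for compliance/forensics purposes.
        self.audit_log.append({
            "time": time.time(),
            "subject": subject,
            "action": action,
            "resource": resource,
            "decision": "allowed" if decision else "denied",
        })
        return decision

engine = Engine([("alice", "read", "doc:1")])
print(engine.is_allowed("alice", "read", "doc:1"))  # True
print(engine.is_allowed("bob", "read", "doc:1"))    # False
print(len(engine.audit_log))                        # 2
```

Real engines evaluate policies rather than a static allow-list, but the shape of the contract, inputs, decision, and log, is the same.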
There was an engine, the Open Policy Agent. It had defined the language, not perfect, but it's there, and we felt like we wanted to make that the basis of an authorization solution for application authorization, right? Because OPA was used a lot in infrastructure authorization scenarios, and we felt like there was a vacuum in terms of solutions for application authorization. And very quickly, we realized that OPA had a pretty good story for policies, but it didn't really have a great story for data. And for all of us that have been in this space for a while, we know that data is the problem, right? Because you're basically making authorization decisions based on data. And with OPA, there are really two models: you bring the data to the engine when you make the call, or you store the data along with the policy. And we felt like neither model was very satisfactory. Because if you're storing the data with the policy, sure, if you don't have that much data and it's slow moving, then maybe that's fine. But in application authorization scenarios, the policies evolve slowly and the data evolves very fast. So the cadence of evolution is very, very different. And so we realized OPA doesn't have a data plane, and we needed to go build a data plane for OPA. We built that authorizer, and then, as we continued down this journey, we realized there's actually a perfect data model for the data that we want to support in our authorizer, which is the triple store model that Zanzibar is basically based on. And so we decided to combine these two really good ideas. We felt like there was no other open source project that combined these things in a way that was complementary. Oftentimes in this industry, you hear policy-as-code versus policy-as-data. And we felt like, hey, you could actually take the best of both of these things and combine them in an elegant fashion. That's what we hoped for.
We hope we've done it. Yeah, yeah, I definitely agree. I think there are instances where having your policies expressed as code directly, using a policy language, it could be Rego, it could be another language if that's your jam, like Cedar, which we did an episode on earlier this year. That's perfectly valid, and there are scenarios for that. And there are scenarios where being able to express things from a relationship perspective, with the triple syntax that you're mentioning, fits better. We also did a couple of Zanzibar episodes, which we're going to link to. There are always ways in which you can find where each fits, and being able to combine these two together could be really powerful. But I think the key thing here is that, as you're saying, the data is the biggest thing. And that's one of the main things that at least I thought of when I started looking at this problem. It's similar to MapReduce, right? You want to bring compute to the data, but you don't want to be moving data over every time you need to make a decision, because these authorization decisions need to be fast. So it seems that's one of the things that you folks are going for. What are the other design principles behind Topaz? What are the things that people will see and be able to feel if they start using it? Yeah, the first one that you mentioned is a great one, which is we think that authorization has to be a local call over local data. You mentioned the XACML architecture. I think I'm old enough to remember; in fact, my team actually participated in building the XACML spec. And the spec does a good job at laying out functional components. So you have the policy decision point, which is essentially what we call the authorizer, and then you have policy information points.
The idea is that the policy decision point can reach out to the information point, grab some data that it needs for evaluation, and then make a decision. That may have been fine in an earlier age of software; it ain't fine in the age of distributed systems, because these services that are downstream from you have availability and latency characteristics of their own. And so, in the same way that you don't want to make an API call across the Internet for every authorization request, because authorization is in the critical path of every application request, you don't want to take 50 milliseconds, 100 milliseconds of latency talking over the Internet just to find out whether the user is able to do the next thing or not. You have the same issue with respect to the data. So we felt like the data has to be cached. And so that's why Topaz, if you look at it under the hood, is really a set of things. It's OPA as the decision engine; we bring that in as a Golang library. It's an embedded store that's based on BoltDB. We basically store all the data there, and we demand-load that data out of that storage as the policy needs it. So we basically have a database architecture within Topaz. And then it has a set of APIs, right? API gateways, gRPC, REST, and so on. And so making a decision call can be very, very quick. We typically benchmark decision calls at around a millisecond. So that's the architectural principle around making decisions quickly with local data. And then the other ones I think we've already talked about. We want to support all manner of fine-grained access control, so we want to support attributes and relationships; we think both of them are important. We think it's important to be able to extract policy out of the application and store and version it separately. So we want to support that policy-as-code workflow.
And then, like we talked about, we have the real-time aspect of it, and the decision logs. Again, it's very, very important to maintain a log of everything that's ever happened. So those are the architectural principles that you'll find in Topaz. Nice. Yeah, one of the things you said reminded me of a conversation I had a few years ago with someone, which was: when you draw something on the whiteboard and you have arrows, what the arrow is, is very important. Is it a method call, or is it a wire across the continent? Because those two things are going to make a big difference in whatever you're doing, right? It's not just theoretical. And I see a lot of that in what you folks are implementing. Yeah, I mean, DCOM is a case in point, right? Back in the 90s at Microsoft, we had this thing called COM, which was very successful as a way of sharing components between different applications. The idea of DCOM was distributing that. Very soon, we found out the difference between chatty interfaces and well-designed interfaces. So, nothing new under the sun. Yeah, yeah, exactly. It's like fashion, it's cyclical. You mentioned this notion of fine-grained authorization. You folks started doing what you're doing in 2020, so like three years ago. What's making fine-grained authorization important nowadays? It seems like it's a fast-growing trend. Yeah, I think there are a number of things contributing to that. If you look at the macro trends, I think this notion of perimeter security is a thing of the past. We have this very overused term called zero trust, which I personally hate, but there's this idea that the burden of securing applications has moved from the environment to the actual application itself. But the applications haven't really kept up.
And if you look at the 2021 OWASP Top 10, broken access control is number one. In 2023, they just published the API Security Top 10, and number one, number three, and number five are all different forms of broken access control. So we know that this is a problem that's plaguing these applications. And at the same time, you look at the breaches that are happening and how devastating they are to those organizations, and you end up with an imperative to adhere to the principle of least privilege. And when you start looking into what that entails, it really means the status quo is not going to work anymore. This idea of over-provisioned roles that you grant all these people, where you do recertifications every year, and it's a manual, error-prone, soul-crushing process. People realize that there's got to be a better way. And for us practitioners, we like to say we haven't had our OIDC moment, our OpenID Connect moment, for authorization. There was this magical moment, let's call it 10 years ago, where all of a sudden we had a critical mass of SaaS applications that supported a standards-based authentication flow, and we had a critical mass of companies like Auth0 and Okta that gave organizations the ability to build that very cheaply and easily into their applications. We just haven't had that yet for authorization. We look forward to the moment where the same thing can happen in authorization, where we don't have every application building its own authorization system and every application admin having to manage all of them. We look forward to the day where an N times M problem becomes an N plus M problem. So I think all of that is in its early days.
But that's why I think there's a lot more focus on this. Yes, I like how you use terms that are more in the air, so to speak, to explain this. So it's like, hey, these over-provisioned roles that you have, that would be the coarse-grained stuff. And then you get into the principle of least privilege, which is this idea that comes with the notion of zero trust; that would be very tied to fine-grained: no more than you need. How does Topaz implement fine-grained authorization? Again, that's the concept, but what's your vision of how it should work, and how are you making it happen? So I think basically we have a couple of ideas of what data looks like, right? For every organization, there's going to be common data. Things like users and groups are typically shared across all the applications, at least for internal apps. And then each application has application-specific data, right? And so we think the right architecture is to have shared concepts of users and groups, and the ability to assign them as subjects across any of these applications. And likewise, we think that a lot of these applications will have policies that are a little bit more involved than just a ReBAC policy. You could have an application that's entirely ReBAC or entirely attribute-based. But we find that most of the folks we talk to, and that may be selection bias, because the people who find us actually have more complicated models, but most people have this combination that they want to implement, one that basically wants to reason about attributes of users, attributes of resources, and relationships between them.
And so we feel like a policy should be able to give you access to both these common things like users and groups, as well as application-specific resources and the relationships between them. And so with Topaz, the store we've implemented is not just a relationship store. It's a store that optionally allows you to store concrete objects, so users and all their properties, and potentially even domain objects. Now, where the line is between the authorization system and the line-of-business application database, that's a gray area. We like to tell customers: do not treat us as an OLTP store. Don't store all of your application data there; it makes no sense. Only store the data that you need for authorization purposes. And if that's just literally the key of the resource, that's great. But if you need some attribute of the resource, for example, whether a review has been submitted or not, that's something that's very easy to store inside of that data store. And you can then reason about not just relationships between a user and an object, like user and document, but whether that document was also submitted, and make both conditions necessary for an allowed decision. So that's really how we implement the best of both worlds, ABAC and ReBAC, in the same system. That makes sense. So let me see if I can map these notions to a company that's relatively large and has a few services and a few apps. If I understand correctly what you're saying: if you're working with different services, let's say you have a service-oriented architecture, to avoid the microservices term, each team would conceptually write policies that have their own business logic.
And these policies would be able to consume the users and the groups that are figured out through these relationships, like a user that's in a group and a group that's in other groups. And those would be more global, right? So it's not like each of those services would have its own data or notion of groups; it's more like shared users and groups. And then the attributes and the policies are per service. Yeah, so basically the subjects can be shared, and an organization can define their own subject types. They could decide that their departments are going to be organized in a very particular way. And so an admin now has a way to basically say, okay, for this particular application, the way that we're going to do assignments is we're going to do assignments to departments, for example. So, for this folder, everybody in this department has read access. And that way you could basically have an organizational principle on the subject side that can span multiple applications. And I think that's a big unlock in terms of starting to make it easier for admins to have a single control plane for the subjects in the system. Now, objects are going to be application-specific, right? And so every application will have its own model. And for us, a model consists of both the rules in the policy as well as, essentially, a manifest for our directory. So you'll define your object types, your relation types, your permissions in an application-specific fashion. And so every application has its own version of that. But it can basically use as its subjects all of these shared constructs that an organization sets up. And we think that starts taking you down the path where, instead of N times M, you can start collecting these common things.
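The split described here, organization-wide subjects consumed by application-specific models, can be sketched as follows. This is an illustrative toy, not Topaz's manifest format or evaluation logic; the `check` function, the group names, and the relation names are all invented for the example:

```python
# Shared subject data: group/department memberships,
# maintained once for the whole organization.
org_groups = {
    "dept:hr": {"alice", "bob"},
}

# Application-specific model: which relations grant which permission.
app_model = {
    "can_read": {"reader", "owner"},  # relations that imply can_read
    "can_write": {"owner"},
}

# Application-specific relationships; subjects can be shared groups.
app_relations = [
    ("folder:reports", "reader", "dept:hr"),  # whole department can read
    ("folder:reports", "owner", "carol"),
]

def check(user, permission, obj):
    for (o, relation, subject) in app_relations:
        if o != obj or relation not in app_model[permission]:
            continue
        # The subject may be a concrete user or a shared group.
        if subject == user or user in org_groups.get(subject, set()):
            return True
    return False

print(check("alice", "can_read", "folder:reports"))   # True, via dept:hr
print(check("carol", "can_write", "folder:reports"))  # True, owner
print(check("alice", "can_write", "folder:reports"))  # False
```

The point of the split is that `org_groups` is managed once, centrally, while each application brings its own `app_model` and `app_relations`.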
Well, how does an implementer or a developer decide whether something should be an attribute versus a relationship? You mentioned the notion of a department, right? When is that department an object that is related to the user, versus when is it an attribute of the user, user.department equals HR or something like that? Yeah, that's a fantastic question. And I think, for many of these things, it really is a style choice, right? Many people that come from an attribute world would immediately say, well, department, that's an attribute. For those of us that have been in the relationship world, we think of that as a relationship. There are some cases where it's a little bit more clear-cut. So, for example, there may be a set of statuses on a piece of data that you don't want to create a relationship for, where you'd have to go modify an object, remove one relationship and add a different relationship. Those tend to be like enums, in software-speak, so to speak. But there are a lot of things where I think it's a judgment call. For things like organizations, projects, teams, those tend to feel more like relationships. For us, managerial relationships are obvious: you have a user graph, and one of the things that we focus a lot on is being able to import data about not just users, but also management relationships. Those, typically, you want in policies as a transitive type of relationship. So if, for example, someone owns a document, in a lot of organizations their manager should have at least read access to that document.
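The transitive manager relationship described here amounts to a small graph walk. A minimal sketch, again illustrative rather than Topaz's evaluation engine, with names and data invented for the example:

```python
# Management edges: report -> manager.
manager_of = {
    "dana": "erin",   # erin manages dana
    "erin": "frank",  # frank manages erin
}

# Document ownership.
owner_of = {"doc:policy-42": "dana"}

def can_read(user, doc):
    # The owner can read, and so can anyone up the owner's
    # management chain, because the relation is transitive.
    person = owner_of.get(doc)
    while person is not None:
        if person == user:
            return True
        person = manager_of.get(person)  # walk up one level
    return False

print(can_read("dana", "doc:policy-42"))   # True, owner
print(can_read("frank", "doc:policy-42"))  # True, manager's manager
print(can_read("zoe", "doc:policy-42"))    # False
```

Expressing this as attributes instead would require copying the whole management chain onto every user and keeping it in sync, which is why transitive cases fit the relationship model better.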
That may be true in the insurance industry, for example, where the manager of some insurance office has access to the policies that their reports are writing. Those are very standard types of things, and they tend to be transitive relationships that are much better expressed as relationships than as attributes. And then the last thing I'll say is there are some things that are really hard to do as relationships. For example, any type of calculation where you're trying to figure out whether somebody has an approval limit above a certain number or below a certain number, and compare that to an attribute of a resource. So I have an invoice, and as an approver, I can approve invoices that are under 50,000. That is an expression. And so those tend to be expressed more as rules, as opposed to as relationships. Yeah, that makes sense. That's a very detailed answer, and I think in general anyone implementing authorization would benefit from it. We've been talking a lot about relationships. The Topaz documentation mentions this notion of, I'm going to use air quotes although no one sees them, a Zanzibar directory. We've talked about this Zanzibar thing in the past, but what is the Zanzibar directory specifically in Topaz? How is it different from Zanzibar in the paper? Why did you decide to make those changes? Yeah, so I think the idea of a triple store is not new, right? It goes back at least as far as the RDF model and the semantic web. And so we basically set out to build something that expressed the ability to express relationships between subjects and objects, and also to optionally store those concrete objects.
So different implementations differ, but we chose to have, for example, a table for objects, not just a table for relationships. So in that way, you could say that we've extended the core idea of just storing relationships. A lot of the core of the Zanzibar paper is expressed, at least in terms of the model. So this idea that our model underneath the covers is able to express different operators and things like that; we've been in the process of implementing more and more of these operators. But the idea of being able to express these relationships and evaluate them transitively, that's all in there. Some of the things that we chose not to focus on that are in the Zanzibar paper: most notably, zookies, the consistency guarantees. Not to say that's not important for some subset of applications, but when we talk to our prospects and customers, their experience is so crappy today with respect to consistency, they basically say: look, here's the world I come from. I have an access token. The access token has a time-to-live of two hours, and for the most part, once that access token is minted, any changes on the back end have to wait until that token expires. And yeah, we know that there are certain revocation mechanisms, but that's really the state of the art for us right now. So our latency between when something happens in the back end and when applications end up sensing it is typically measured in hours, not in seconds. The fact that you can get to consistency in a second, that's amazing for us.
So we found that while there are probably some applications that could benefit from a total causal ordering between these types of events, even one of the folks that built Zanzibar, Abhishek over at Google, famously came back and said: hey, we kind of over-invested in that mechanism, and maybe under-invested in developer experience. And so that does come up for people who are very, very familiar with Zanzibar. They ask us whether we have support for zookies, and we say no. And that can be a differentiator for other people who've actually built that. But for the most part, we've found the vast majority of people don't necessarily care about that mechanism. Yeah, I think it's good that you mentioned that. I think it was an interview that Abhishek did with Oso or something like that, where he shares that. Yes. And in general, I think it's because the paper mentions the word zookie so many times that it became so prominent. And then, the more time you spend thinking about the problem, the more you realize that if you can keep the system convergent in very small amounts of time, it works for a great, great number of problems. Another thing that I always think about, and this is a less popular paper: Facebook had this paper, I think it was in 2013, about their authorization system, which is kind of like this graph, and it's definitely eventually consistent. And they came up with this other paper, I think it was eight years after that, so if I remember correctly, 2021, called RAMP-TAO, where they added some of these stronger guarantees, not exactly zookie-like. So if it took Facebook those eight years for some of their teams to need it, it's likely that most folks don't really need it.
Yeah, and we've talked to very smart teams that have read the paper and looked at our software while they were trying to figure out whether they were going to build their own or use something else. And they asked us about it, and we said, well, we decided not to build that mechanism. And they're like, oh, okay, good, because we were thinking about building it, but we didn't think we needed it, and we were just trying to figure out whether we were smoking something. And so there's definitely one product that I have to give props to here, SpiceDB, which has implemented the Zanzibar paper pretty much to the letter of the law, and has improved upon it. And I think they've done great work. We just felt like that's not the approach we're trying to pursue. We're not trying to be the most Zanzibar of all Zanzibar implementations. What we're trying to do is just be very pragmatic, as a small team, in prioritizing the problems that people are asking us about. And for us, for example, the problem of getting data into the authorization database is a much more pragmatic problem, and that's where we've decided to spend a lot more of our time than on the consistency guarantees that, frankly, not very many people ask us about. Yeah, that makes sense. So talking about that, what data sources does Topaz support? How do you folks, first of all, decide what to prioritize? But also, what do your users typically ask about? So the most obvious type of data is users and groups, which come from identity providers, directories, and the like. We've basically built a framework we call ds-load. It's a very engineering type of name, but it's actually an ETL type of process, where you have a set of plugins, which we've built but anyone can build, that extract data out of the source.
And then we transform it, and we give you a lot of control over that transformation, which happens to use Go templates. And then we load that into the directory; we've created a pipeline for doing that. And you can run that pipeline yourself; ds-load is an open source tool, but we've also assembled it in reasonable ways in our commercial product. And so we've built plugins for, obviously, things like Auth0 and Okta and Azure Active Directory and Cognito and Google Workspace, those types of things. Workday is a great source of data for relationships between individuals. And the idea is that you can extract data from one source and also augment it with relationship data from other sources. So it's a pretty cool framework for getting the authorization data into the store, where it's going to be used for authorization decisions. And most of the time, folks tell us the authorization system isn't the source of truth for user information or group information. And that's pretty obvious, right? So it's going to be an extract database. And part of the magic here is being able to wire into the various eventing systems, so you can push data down from the source into the extract database that we store in Topaz. Yeah, that ETL process, and particularly merging these different sources, is probably very valuable. One thing I'm curious about is, I imagine deployments of Topaz have multiple instances of this running, where you might have different nodes with their own database, BoltDB. Does the ETL process end up communicating with all of them? Is there some kind of coordination mechanism between these nodes, and how does the data ingestion work?
Yeah, we're very flexible with respect to how Topaz instances end up reaching their data. There's an interface, a gRPC interface, between the authorizer and the directory. By default, a Topaz talks to its local BoltDB. It can also talk to a Postgres-backed directory that you can stand up as its own microservice in your cloud, close to where the authorization is happening. And then we have a control plane. The control plane basically acts as a signal: as soon as new data or new policies are available, it will send the connected Topaz instances an event over a NATS bus, and the Topazes will download the new data into their BoltDB, if that's how they're configured. So there's a lot of flexibility in terms of how you want the actual data architecture to work. Some organizations want to have all the data in their cloud and don't want it to leave at all. Other organizations want to have the data actually sourced in the Topaz, sorry, the Aserto directory. Then they'll want to make changes, add and remove objects, change objects, add and remove relationships, in one central place, and have our control plane distribute all of that to all the Topaz authorizers. So it is truly an eventually consistent model. And if people value consistency above all, they can have a single relational-database-backed directory that sits on their side, which they can scale out, and then all the Topazes are basically stateless with respect to it. So both models are supported.

Yeah, I see, that makes sense. So it's kind of like a spectrum, a pick-your-own-adventure kind of thing.
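The signaling pattern described above can be sketched as a toy in-memory model: a central directory publishes a "new data available" signal, and each edge authorizer pulls a fresh snapshot into its local store. This illustrates the eventual-consistency idea only; it is not Topaz's actual NATS wire protocol.

```python
# Toy model of eventually consistent data distribution: the central
# directory signals subscribers on every write, and each edge authorizer
# pulls a snapshot into its local store (standing in for BoltDB).
# Illustrative only -- not Topaz's real control-plane protocol.

class CentralDirectory:
    def __init__(self):
        self.version = 0
        self.data = {}
        self.subscribers = []

    def write(self, key, value):
        self.data[key] = value
        self.version += 1
        for edge in self.subscribers:      # the control-plane "signal"
            edge.on_new_data_available(self)

class EdgeAuthorizer:
    def __init__(self, directory):
        self.local = {}                    # local BoltDB stand-in
        self.version = 0
        directory.subscribers.append(self)

    def on_new_data_available(self, directory):
        # Pull a fresh snapshot. Between the write and this pull, the edge
        # serves slightly stale data -- hence "eventually consistent".
        self.local = dict(directory.data)
        self.version = directory.version

    def check(self, key):
        return self.local.get(key, False)

central = CentralDirectory()
edge = EdgeAuthorizer(central)
central.write(("alice", "can_read", "doc1"), True)
print(edge.check(("alice", "can_read", "doc1")))  # True
```

The trade-off Omri describes maps directly onto this sketch: keep many edges with local snapshots (low latency, eventual consistency) or point every stateless authorizer at one shared directory (strong consistency, more network hops).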
Yeah, architectural flexibility is very important here, I think. We found that different types of organizations prefer different things, and we've tried to be as flexible as possible in making these things into components that they can assemble in the right way.

Yeah, and authorization is very involved in the architecture of the system, right? It's not an external component where one flavor fits all; you really need to adapt to each of these architectures. We've been talking a bit about how to get data into the system. What about the data that goes out of the system, particularly the very valuable outputs of every authorization decision? Where can you send them? How long does it take? What information is available in every Topaz log?

Yeah, so like you said, outputs are just as important as the inputs, and the decision log stream is very important for things like compliance and forensics. By default, Topaz will collect decision logs but store them in a local file. If you connect the Topaz instance to the control plane, it will queue them up using exactly-once, in-order semantics, again using NATS, and push them up to the control plane, where the control plane will aggregate them together per policy. So essentially every policy, which corresponds to every application, will have a complete stream of its decision logs. And we offer streaming APIs and batching APIs to get that into your logging system. For example, we have an ELK integration where you can set up a Logstash pipeline to tap into the stream and get the data into your ELK stack.

And how does Topaz typically integrate with a system's architecture? How is it deployed? Is it a Kubernetes thing, and do you have templates for it? Does it run as a plugin in the cloud environment?
What are the flavors that folks have available to them? Yeah, so the easiest to consume is just a container image. Topaz comes as a container image, and we have a CLI for it that makes it easy to obtain it and then run it, start it, stop it, configure it, and things like that. But ultimately Topaz is a binary. It's a Golang binary, so we compile it for the x64 and ARM64 flavors of Linux, Darwin, and Windows. We have customers that are actually just running the binary on a VM, because they have that kind of environment. But most people are running Topaz either as a sidecar or as a microservice in their Kubernetes cluster; that's the most prevalent deployment architecture.

Yeah, that makes sense. What about teams that don't want to run this themselves? I guess some folks are like, okay, can someone do it for me? What alternatives are there?

Yeah, so we believe that the right architecture is to run the authorizer locally and have us run the control plane. Some customers want to run the control plane, or portions of it, on their side of the cloud, and like I said, we're very flexible with respect to that. And some people love the fact that we have a hosted authorizer: every policy instance that you create with us will automatically spin up a hosted authorizer. It makes it super easy to get started. You could literally just start interacting with an authorizer that's running a policy without having to deploy anything. So getting up and running for a developer takes, you know, five minutes or your next one's free, as I like to say.
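Going back to the sidecar deployment mentioned a moment ago, the pattern can be sketched generically: before serving a request, the application asks a co-located authorizer for an allow/deny decision. The endpoint shape and payload fields below are invented for illustration (and the authorizer is stubbed out so the sketch runs standalone); this is not Topaz's actual API.

```python
# Generic sidecar-authorizer pattern: the app asks a co-located authorizer
# for an allow/deny decision before touching the protected resource.
# Payload shape and decision logic are invented -- not Topaz's real API.

import json

def authorize(transport, subject, permission, resource):
    # In a real deployment `transport` would POST to the local sidecar
    # over loopback; here it's injected so the sketch runs without a network.
    req = {"subject": subject, "permission": permission, "resource": resource}
    resp = transport(json.dumps(req))
    return json.loads(resp)["allowed"]

def fake_sidecar(body):
    # Stand-in for the local authorizer: allow only alice to read.
    req = json.loads(body)
    ok = req["subject"] == "alice" and req["permission"] == "can_read"
    return json.dumps({"allowed": ok})

print(authorize(fake_sidecar, "alice", "can_read", "doc1"))  # True
print(authorize(fake_sidecar, "bob", "can_read", "doc1"))    # False
```

Because the authorizer is co-located, the round trip stays on loopback, which is what keeps the per-request authorization budget in single-digit milliseconds rather than a cross-network call to a hosted service.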
You know, but ultimately, we don't know of any organization so far, at least among our customers, that has decided to run their production workloads against our hosted authorizer. That's not to say we don't have good availability; we've achieved around 99.95% availability over the last year for our hosted services. But we still think that, from a latency perspective, most applications are latency-sensitive enough that they want the authorization budget to be measured in milliseconds, as opposed to tens or hundreds of milliseconds. And so that's why we've made the architectural choice to strongly advise our customers to deploy the authorizer close to their app. But the control plane we can operate completely for you as a user. We have some organizations that say, look, no data can ever leave our premises; we want to run in an air-gapped environment, we have our own container image registry, we have our own database systems, everything has to run locally. And fortunately, the system is built in a way where people can deploy it that way as well.

That makes sense. So if I get that right, you can host everything yourself, both the Topaz data and the control plane; you can run Topaz in your environment and then use Aserto, the cloud service, as the control plane; or you can run everything with Aserto, so things could essentially be managed in the customer's cloud completely, correct?

Yes.

And beyond what Topaz provides, are there things that Aserto, the cloud product, gives customers that they wouldn't have if they just went with the open source version? Yeah, I think basically the deployment flexibility that I talked about. So the scalable Postgres-backed directory, that's something we provide for commercial customers.
We have deeper integration with data sources that is built around that signaling mechanism I talked about. So the idea that new policies, as soon as they become available in a policy registry, automatically get synced to the Topazes, that's something the commercial product provides that's not available in open source. In open source you just have a timer, and you can configure that timer, but it will basically poll the container registry to find a new policy image, and when one is available it will download it. So I would say a lot of the optimization work we've done in terms of signaling, handling data, collecting decision logs, aggregating decision logs, and the policy-as-code workflow. We have an automatic way to help you create a GitHub or GitLab repo and install a set of actions that builds a policy into a container image, signs it and pushes it to a container registry, and then all the signaling mechanisms kick in. So all of that is in the commercial product; the decision log aggregation is in the commercial product. You can completely use Topaz standalone, and it works like a champ, and a lot of people do. But then there's a lot of heavy lifting to do to actually make that scale for your organization, and we think that's where our commercial product can really come in handy.

Yeah, that makes a lot of sense. And this is where a lot of products in the space are, right? There's some version of open source, and then once you figure out either the deployment model or how to get data in and out of it, similar to syntactic sugar in languages, if you want the niceties, you typically go with the hosted version, and it makes things a lot easier for customers. This was a great deep dive.
Omri, I really appreciate your time. Is there anything else that you would like anyone listening to know?

Well, first of all, I really enjoyed the conversation with you as well. It's awesome to be able to talk to somebody who's so deep into the space, having founded your own project there as well; you guys have done some awesome work with OpenFGA. I think we're all learning from each other, and we all have one shared goal, which is to take this sorry state that we find ourselves in with respect to authorization and improve the state of the art for the industry, because it's just so important. Today it's still nowhere close to where it should be in terms of being turnkey for developers. It's undifferentiated heavy lifting, and yet everybody has to go build their own. So I'm very happy that there's so much attention on the space now, from large companies like Amazon and small companies alike. It's great to be part of that ecosystem, and we're looking forward to helping folks move forward. And of course, if you're interested in what you've heard, come look us up. We have a community Slack, and we're very friendly there. If you go to aserto.com/slack, that's where you can find us.