Welcome to another episode of Authorization in Software, where we dive deep into tooling, standards, and best practices for software authorization. My name is Damian Schenkelman, and today I'm chatting about just-in-time authorization with Atul Tulshibagwale, CTO at SGNL. Hey Atul, it's great to have you here.

Yeah, thanks for having me. Great to be here.

So can you give listeners a brief overview of your background and what you're currently working on?

Sure. My background goes a long way back in identity. I was one of the early engineers at what was then a very tiny company called VeriSign; I was something like the 13th engineer to join. That was the certificate authority that people used by default, and it grew into a very large company over the years. I was privileged to observe the growth of real cryptography on the internet through the use of SSL certificates. At that time we were trying all kinds of different things, like S/MIME for secure email. It was great to see it all build up into what it is today.

Then I started my own company, totally focused on identity, called Trustgenix. We defined the notion of a federation server, a model that pretty much everyone uses now. We got started way back when, and we were acquired by HP, which was one company at the time; it has since split into the different companies it is now.

After that I took a break from identity. A few years later I was an architect at a company called MobileIron, working on an access management product that would only allow you to access cloud services from managed devices. Then I was at Google, working on the BeyondCorp devices API, and as part of that we got thinking about some of the problems in session management. I wrote a blog post back in February of 2019 about CAEP, the Continuous Access Evaluation Protocol as it was called at the time; now it's the Continuous Access Evaluation Profile. That blog post became quite popular, a standards group formed around it, and I got involved in the standardization. Now I work at SGNL as the CTO, working on authorization, and I'm also a co-chair of the OpenID Foundation's Shared Signals Working Group, where we're developing CAEP, the Shared Signals Framework, and the RISC standards.

That's amazing. You've been doing identity, authentication, authorization, and security for so long; it's great to have you here and dive deep into your expertise on these topics. You mentioned CAEP. You mentioned authorization at SGNL. I've done some research, and I know SGNL is doing just-in-time authorization, so let's start there. What's just-in-time authorization? How would you define it in your own words?

So just-in-time authorization, you can think about it as a continuous way of dynamically authorizing access. What I mean by that is that, in the ideal situation, you don't need to have any standing privileges. You don't need a sort of birthright to be able to access something. When you're accessing something, there needs to be a reason why that access is justified.
That reason can come from something you're working on: your role in the company, the particular task you're doing that is why you need access to that data. That is what just-in-time authorization tries to bring to reality. In practical terms you can continue to use RBAC, role-based access control, for the things that are relatively static and straightforward. But for the use cases where access to the data can be pretty damaging if it is granted without reason, that is where you can use something like just-in-time access management.

So this way of thinking says you don't have access to things unless there's a good reason for you to have it, and at the same time that reason is dynamically determined by a number of factors; it's not something static or set in time.

Exactly.

Good. You mentioned role-based access control, and that you might still use roles for relatively elementary, basic things. What about attribute-based access control and the notion of using policies? How do those relate to just-in-time authorization? How might someone familiar with those terms come to understand what just-in-time authorization means? Maybe they've been doing it already; maybe they don't have the tooling or the wording for it, but it's up that alley.

Yeah. So attribute-based access control is interesting, and it's a step in the direction of where we've taken just-in-time authorization. Where it started is that with attribute-based authorization you don't need a group or a role specific to every fine-grained resource. Take the example of a cloud service provider that has some compute clusters and some storage buckets. If you had to define a role per compute instance, saying user A has access through this role, and then define the membership of each of those roles, you'd end up with a very, very large number of roles. You have a multitude of resources, and each resource having a different role would be quite unmanageable.

The great thing about attribute-based access control is that you can instead attach a tag to the resource, and if the user has the same tag, you say this user has permission to access that particular resource. What that does is change the dynamic: the nomenclature of the tags becomes the policy. This particular resource is tagged yellow for some reason, and the user has the tag yellow. All you have to define is these various tags, and no matter how many resources you have, the number of things you have to manage is just the tags. So it becomes much more manageable than role-based access control in this fine-grained universe. What just-in-time authorization does is take that staticness of the tag and make it into a more dynamic evaluation based on relationships.
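To make the tag idea concrete before going further, here is a minimal sketch of an attribute-based check; the data shapes are hypothetical illustrations, not any particular product's API:

```python
# Minimal ABAC sketch: access is allowed when user and resource
# share a tag. The data shapes here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    tags: set[str] = field(default_factory=set)

@dataclass
class Resource:
    name: str
    tags: set[str] = field(default_factory=set)

def abac_allow(user: User, resource: Resource) -> bool:
    # One rule ("matching tags grant access") covers any number of
    # resources; only the tag vocabulary has to be managed.
    return bool(user.tags & resource.tags)

alice = User("alice", tags={"yellow"})
bucket = Resource("storage-bucket-7", tags={"yellow"})
print(abac_allow(alice, bucket))  # True: shared tag "yellow"
```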
So instead of saying this particular resource may always be accessed by someone who has this particular tag, what you say is: this user is trying to access this particular resource; let's find a reason why that makes sense. It could be that the user is working on a particular case that has been properly approved and escalated, and as a result that user needs access to the resource that relates to that case. And so the user gets access.

When you talk about compute clusters, you typically end up talking about privileged access management, and not every user is going to access a compute cluster. Your typical business user is just going to access a business system, say an application that shows information about a customer, like their home address. Why should that user have access to that home address is something you can determine using just-in-time access control. If you had to do the same thing with attribute-based access control, one way is to use a multitude of attributes, and the number of attributes would explode. The other way is to do some kind of custom computation in order to arrive at the conclusion that this user needs access to this particular customer's data. But that custom computation becomes quite unmanageable: it becomes policy as code, and the number of things you have to change explodes whenever you have a change in policy.

For example, take a particular territory like Hong Kong, which used to be more independent and is now closer to China. As a result, you may not want your employees in Hong Kong to have access to data that doesn't belong to users in Hong Kong. Effecting that kind of policy change through code, across the hundreds or thousands of applications you might have, is very labor-intensive, and it's the kind of thing that cannot wait. What just-in-time authorization is able to do is make it a simple policy change: you get the same effect without having to do any coding.

So it seems we still have the notion of policies, but the data sources, the attributes, are assessed differently to determine the decision in real time. The question, I guess, is: when does something become real time, or just in time? How do you get from one extreme, fully static, where I have fixed policies and my data sources, all the way to a very dynamic way of getting this information and making these decisions?

Yeah, and that's a great question, because it goes to the heart of what we do at SGNL: how do we get this dynamic information about the user's activity? Typically, these systems that provide an access decision have to be inline. They run as the user is performing their task, so they are very time-sensitive. If at that moment you're trying to do some complex computation and reach out to different systems, which may have different availability and different latency properties, it becomes really difficult to get that real-time information into the decision.
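A matching sketch of the case-based, just-in-time check described above, where the static tag is replaced by a live, approved relationship; the names and shapes are again illustrative assumptions:

```python
# Just-in-time sketch: instead of a static tag, look for a current,
# approved reason (here, an open case assignment) linking the user
# to the customer whose data is being accessed.
from dataclasses import dataclass

@dataclass(frozen=True)
class Case:
    case_id: str
    assignee: str      # user working the case
    customer_id: str   # customer the case relates to
    approved: bool
    open: bool

def jit_allow(user: str, customer_id: str, cases: list[Case]) -> bool:
    # The "reason" is evaluated at access time; close or de-approve
    # the case and the access disappears with it -- no standing privilege.
    return any(
        c.assignee == user
        and c.customer_id == customer_id
        and c.approved
        and c.open
        for c in cases
    )

cases = [Case("CS-101", assignee="alice", customer_id="cust-42",
              approved=True, open=True)]
print(jit_allow("alice", "cust-42", cases))  # True while the case is open
print(jit_allow("alice", "cust-99", cases))  # False: no justifying case
```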
The approach SGNL takes is to create a highly performant graph database. The graph captures the latest information about the users, continuously ingested from your data sources. These data sources are your normal systems of business: your case management system, your CRM, your HR system, your identity provider, and any other system, like device management. All of these together are continuously contributing information to that graph. And because the graph data structure is so highly performant, you automatically get the ability to answer these questions in a very short period of time, on the order of tens of milliseconds, so that from a user's perspective it's a real-time decision.

That makes sense. What this means, and in this case you mentioned SGNL, but for any such system: rather than receiving a query, making a decision, and reaching out to its dependencies to figure out whether access should be granted, you're instead feeding information into the system as changes happen in the other subsystems, like the user management system or the ticketing system, and keeping that cached view so you can reply very quickly. Because, again, performance is a big deal in authorization. Now, you mentioned a large number of systems, and these are all pretty different systems. In many cases they might be SaaS or cloud services or on-premise applications run by different companies. How do we get all of those systems to emit these changes, or events, or signals, in a way that says these all mean the same thing? And how do you come to understand them in a single way?

Right. I think there are two parts to that question. One part is: how do you understand these changes to be related? That is the job of the graph you build for your organization. You can say Okta is going to be my authoritative source for user information, and this particular attribute of the user, which may be the email address or the employee ID, is the join attribute for something like your Workday system, your HR system. So you're also pulling information from the HR system and correlating it with the information you received from the identity provider, and the two are joined in the graph: the relationships are established using edges between what might be two different nodes. When you evaluate a policy, you're basically traversing the edges in the graph, making sure those edges are permitted, and checking that you can reach both the node that is the user and the node that is the asset being accessed. That's how the data from different sources relates to each other.

The other part of your question is: how do you actually get this data from all these different systems? In an ideal world, people would use standards like SCIM, or even CAEP or RISC, to deliver that data as it changes.
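A rough sketch of that join-attribute idea, using a toy in-memory graph; the node naming and edge labels are invented for illustration, and SGNL's actual model and ingestion pipeline are of course richer:

```python
# Toy graph: edges are labeled (from_node, label, to_node) triples.
# Records from two systems are correlated on a join attribute (email).
graph_edges: set[tuple[str, str, str]] = set()

def add_edge(src: str, label: str, dst: str) -> None:
    graph_edges.add((src, label, dst))

# Ingest from the identity provider (authoritative for users).
idp_record = {"email": "alice@example.com", "id": "idp-account-123"}
add_edge(f"user:{idp_record['email']}", "idp_account", idp_record["id"])

# Ingest from the HR system, joined on the same email attribute.
hr_record = {"email": "alice@example.com", "department": "support"}
add_edge(f"user:{hr_record['email']}", "member_of",
         f"dept:{hr_record['department']}")

# Ingest from a case system: the department handles this customer's case.
add_edge("dept:support", "works_case_for", "customer:cust-42")

def reachable(src: str, dst: str) -> bool:
    """Decision = traversal: is there a permitted path from user to asset?"""
    frontier, seen = [src], {src}
    while frontier:
        node = frontier.pop()
        if node == dst:
            return True
        for s, _label, d in graph_edges:
            if s == node and d not in seen:
                seen.add(d)
                frontier.append(d)
    return False

# An in-memory traversal answers the access question in microseconds.
print(reachable("user:alice@example.com", "customer:cust-42"))  # True
```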
But in the real world, standards-based delivery has a lot of limitations, so we ended up having to build many of these integrations ourselves. We have a pretty good framework by which, if you have a data source we don't already support, it's a relatively simple task for us to build that integration. So that is the part that is work for every company: integrating with the data sources in whatever way they are supported. I wish there were standards that would help us do this much better, but those standards are relatively few; SCIM is a good example, but they are not very common.

Yeah, and I've seen this before; we did some of these things at Auth0 and at Okta, where you try to push for the standard, you try to figure out the right way for everyone to be able to integrate technologies. But that takes time: first standards implementation, and after that, standards adoption. In the meantime you still have use cases you want to help customers solve, that you want to help the industry approach, so you take proprietary approaches while the industry catches up.

You mentioned a number of standards, though. You mentioned CAEP, the Continuous Access Evaluation Profile; you mentioned RISC, Risk Incident Sharing and Coordination; and I've also heard about the Shared Signals Framework, SSF. Maybe we can start getting into what each of these is and where it stands. Let's start with SSF: what is SSF, and how would someone start thinking about it and using it?

Sure. The Shared Signals Framework is a generic framework for implementing secure webhooks. It's a general way in which you can request that certain events be sent to you. You have the ability to create streams of these events, and the ability to specify which subjects need to be in those streams. You can control a stream in terms of being able to start, stop, or pause it. And there's a way to verify whether the stream is alive, so there's a kind of liveness check built into the framework. It's a generic framework for what you could call asynchronous publish and subscribe; or you could say it's a generic framework for doing webhooks. Each API that gets defined doesn't have to define its own webhook technology; it can just use the Shared Signals Framework.

So it seems you can define, as you say, event types. You also mentioned a subject; in this case a subject is more like a topic, not necessarily a user or a set of users. And you can say, these are the events I'm willing to receive, and subscribe to an event. This goes to a big thing in the industry, event streaming: the notion that I get a continuous stream of these changes that applications are going through. Now, once I define those schemas, I'm essentially advertising that I understand these things. How do I get to communicate with other entities that can talk about these things? Is this where CAEP and RISC come in? How do those work?

So, it's all SSF at this point. What we've been talking about, the streaming part and the event types and all that, is all the Shared Signals Framework. CAEP you can now think of as an application of the Shared Signals Framework to session security.
And RISC is the application of the Shared Signals Framework to account management, or account security events.

Let's talk a little about the first thing you raised: how does the receiver, someone who's willing to receive these events, discover the transmitter? How do they negotiate what kinds of events they would like to get, what the transmitter is capable of, and what the transmitter is willing to send them? All of that is defined in the Shared Signals Framework. There is a way to discover where the transmitter exists: the transmitter configuration metadata. You can then create a stream and request certain types of events. You'll get back the events supported by the transmitter, the events that you requested, and the events that will actually be delivered to you. The transmitter may say, I support these ten different event types, but for you I'm only going to send these two, because for the others we don't have the relationship between the receiver and the transmitter that would allow me to send them to you. All of that negotiation is part of the protocol in the Shared Signals Framework.

Thinking about this historically, CAEP and RISC both started as independent things. CAEP started from a session security point of view; that was the blog post I wrote at Google. At that time we were meeting as an informal group of about 30 companies, in various places: we met at Google a few times, at Microsoft a few times, and elsewhere. What we realized is that the underlying notion of an asynchronous publishing substrate is the same notion that RISC uses. There were different aspects to it: the richness of the subjects was different, the mechanisms we were considering were different. But ultimately we were trying to solve the same problem: how do you asynchronously subscribe to different types of events, and how do you get notified when those events occur? So we took that common piece from RISC and CAEP and made it into the Shared Signals Framework, and now RISC and CAEP are two different applications of it. Everything you asked about, the negotiation and all that, is part of just the Shared Signals Framework.

This is very interesting, because it's one of those cases where you have RISC and CAEP as standards, and they're being generalized into a bigger thing, which is SSF. You also mentioned that the transmitter is essentially advertising its capabilities through metadata; that's how a receiver might say, I know you exist, I want these events from you, and there's a negotiation in terms of how the parties trust each other. Who would be receivers in this case, and who would be transmitters? In the examples you gave before, SGNL and all these other entities, and we also talked about Okta: what role would each of these services play, and why would they typically play that role?

Yeah, so it depends on the application. Suppose you just wanted a simple thing: say, the ability to dynamically revoke sessions.
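As an aside, the discovery-and-negotiation flow described a moment ago might look roughly like this from the receiver's side. This is loosely based on the SSF drafts; endpoint paths, field names, and event-type URIs vary across draft versions, so treat it as an illustrative sketch, not a normative one:

```python
# Receiver-side SSF setup sketch (field and endpoint names are from
# the Shared Signals Framework drafts and may differ by version).
import requests

TRANSMITTER = "https://transmitter.example.com"  # hypothetical transmitter

# 1. Discover the transmitter via its configuration metadata.
meta = requests.get(f"{TRANSMITTER}/.well-known/ssf-configuration").json()

# 2. Create a stream, requesting the event types we care about.
stream = requests.post(
    meta["configuration_endpoint"],
    json={
        "delivery": {
            # RFC 8935 = push delivery of Security Event Tokens.
            "method": "urn:ietf:rfc:8935",
            "endpoint_url": "https://receiver.example.com/events",
        },
        "events_requested": [
            "https://schemas.openid.net/secevent/caep/event-type/session-revoked",
            "https://schemas.openid.net/secevent/caep/event-type/token-claims-change",
        ],
    },
    headers={"Authorization": "Bearer <receiver-token>"},
).json()

# 3. The negotiation result: the transmitter may support ten event
# types but deliver only the subset our relationship permits.
print(stream["events_supported"])   # everything the transmitter can send
print(stream["events_requested"])   # what we asked for
print(stream["events_delivered"])   # what it will actually send us
```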
To do that, typically an identity provider becomes your transmitter, and a service provider, like a SaaS app, becomes your receiver. At some point the identity provider believes there is a reason to terminate the session, and it declares to the receiver that it has terminated the session; that is conveyed through an SSF event, specifically the CAEP event called session revoked.

But there are other situations. Say a device management service realizes that the device the user is using is falling out of compliance, and it needs to convey to anyone interested that this particular device is no longer compliant with organizational policy. In that case the device management service becomes the transmitter, and the receiver could be an IdP, it could be an SP, it could be anybody.

A third case, which is closer to what we do at SGNL: if we realize that the kind of authorization the user has needs to change, we can generate an event called token claims change, which is a CAEP event, and convey it to, say, a service provider: now this user should not have access to these resources, or should have access to those resources.

That makes sense. So a particular system might be receiving events or it might be sending events, and it's the type of event that determines whether it gets them or emits them; a system isn't fixed into one particular role. And whenever I think about events, it's a very decoupled pattern, so this seems like something that could evolve over time and doesn't have a lot of dependencies between systems to evolve as a standard. From your perspective, what do you see as the challenges to standardizing some of these frameworks and events, so that they first become standards and then start getting adoption?

Let's take the example that has been working for a while now, which is RISC. Google has had a RISC service for several years, and they've publicly talked about hundreds of thousands of applications using RISC today. You may not even realize that some of the applications you have already have RISC in them, because it's baked into the underlying technology you use: Firebase, for example, implements the RISC protocol. Because of that kind of support, you now have a proliferation of RISC adoption, and it's ready for more RISC transmitters to push events and for others to receive them.

CAEP is a relatively new standard, and we haven't yet seen that kind of adoption, although we have seen certain companies, like Microsoft, use CAEP events within their own infrastructure, without necessarily making them available to outside parties. I feel like you need a few things for this adoption to come around. You need a killer application, some critical problem that you're able to solve using the standard, and you need the urge to be interoperable: you're not just trying to solve it for yourself, you're trying to solve it across software that comes from different organizations. And the thing about security is that unless you have interoperability, you don't get security. I mean, we all take SSL for granted.
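For a sense of what such an event looks like on the wire: CAEP and RISC events travel as Security Event Tokens (RFC 8417), signed JWTs whose payload, for the session revoked event mentioned above, might look roughly like this (all values illustrative):

```python
# Illustrative Security Event Token (RFC 8417) payload for the CAEP
# session-revoked event. In practice this JSON is the body of a JWT
# signed by the transmitter; the values here are made up.
import json, time

set_payload = {
    "iss": "https://idp.example.com",    # transmitter
    "aud": "https://app.example.com",    # receiver
    "iat": int(time.time()),
    "jti": "756E69717565206964",         # unique token identifier
    "events": {
        "https://schemas.openid.net/secevent/caep/event-type/session-revoked": {
            "subject": {
                "format": "email",
                "email": "alice@example.com",
            },
            "event_timestamp": int(time.time()),
        }
    },
}
print(json.dumps(set_payload, indent=2))
```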
We take the encryption protocols that go into SSL for granted, the handshake and everything; TLS I should call it, not SSL anymore: Transport Layer Security. Regardless of what device I'm using, whether it's my webcam or my browser or my doorbell, everything uses the same encryption. And because it is encrypted in the same way, it is interoperable, and therefore it is secure. You don't have these questions anymore of whether somebody can snoop on the data between my webcam or my doorbell and my computer, because that problem has been solved using standards.

I believe that same kind of interoperable security is required in order to deliver what seems to be a very popular goal, which is zero trust. You have software and components, independent services running in an enterprise, that come from different vendors, and you need all of them to work together in order to make it secure. And session security is the core of zero trust, because the difference between the earlier model and the zero trust model is basically that you should be able to dynamically terminate or modulate sessions based on new understanding about the user.

I really like that. I like how you took the TLS example as something we take for granted; although if you've ever done a migration of TLS versions, you know it's also a pain. I remember when we were going from 1.2 to 1.3, making sure that all of our customers were compatible. You mentioned zero trust, which is a big thing in the industry right now, something of a hype term but at the same time very important. For listeners who aren't familiar with it, how would you define it, and why is zero trust something a lot of companies are looking for?

Sure, and this could make an episode of itself, so I'll try to keep it short. Before zero trust, there was a model where you had a firewall, and once you were inside the firewall you were in this walled garden where everything was assumed to be secure. I call that, by the way, a FENCE architecture: Firewall Enterprise Network Computing Environment. In this FENCE architecture, you assume that once you're inside the firewall, you're great. It turned out that is a very misleading and bad assumption, because there have been numerous ways in which people have been able to compromise firewalls. An attacker who gets inside becomes an insider with nearly unlimited access; they can move laterally and compromise a lot of the security.

On the other hand, as people moved to cloud services, to mobile computing, and to working from home, the idea that you had to be inside a firewall to access these things also stopped making sense topologically, not just from a security standpoint. Are you really going to VPN into your enterprise network from your home just in order to get to a cloud service that is itself on the internet? So what the zero trust approach does is allow any network endpoint on the internet to make its own trust decision.
And it has to make that decision in real time. When a user arrives at an endpoint, it needs to decide whether that user should or should not get access. Today that is implemented using things like federated identity, which creates a session and is then a fire-and-forget model: the SaaS provider you're logged into now has to deal with the security themselves. But this creates silos: the identity provider, the SaaS provider, the device management service, an endpoint security service, and many other services all know something about the user, and if they're not able to communicate, you get a worse security outcome. That is the zero trust security architecture, and that is the problem we're trying to solve using standards.

Great, thanks for doing such a good job of explaining those concepts. We're talking about the standards, and one thing I don't think I asked is how someone might start learning about these and implementing them. You mentioned, for example, that if you're using Firebase, you get RISC. If I want to learn about RISC, if I want to start implementing, what libraries are there? The same for CAEP. How might a team or a developer start learning about it and understanding how it works, beyond, of course, going ahead and reading the drafts?

Yeah. For RISC, the great thing is you may not even have to learn about it: you just use some of the toolkits and you get it automatically. You get the end result you're looking for without necessarily having to worry about it, the same way I download a browser and start using it without caring that it's actually implementing TLS.

CAEP is not there yet. So to encourage adoption of CAEP, what we've done at SGNL is launch a website called caep.dev. If you go there, you're able to learn a little about the standard itself; there's some learning content. But the main thing the site does is give you a transmitter that you can test your receiver implementations with. It's not exactly meant for end users who just want the result of session security; it's for developers who are implementing CAEP and want to get events to their receivers in order to test them. So that's one place. Cisco has a great website called sharedsignals.guide that helps you understand the Shared Signals Framework, and it also has some open-source components you can use in your implementations of CAEP and RISC. And lastly, you can always go to openid.net/wg/sharedsignals, which is the homepage of the Shared Signals Working Group. You'll see all the specifications there, and you might also find some help content and blog posts that may be informative. So those are three different ways you can learn about CAEP.

Excellent, and we'll make sure to add the links to all of those in the show notes. You mentioned SGNL, and how you're trying to educate folks about CAEP.
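For developers pointing a test transmitter like caep.dev at their own code, a push receiver is essentially an HTTPS endpoint that accepts Security Event Tokens per RFC 8935. A minimal sketch, with signature verification deliberately stubbed out for local testing only (a real receiver must validate the SET against the transmitter's published keys):

```python
# Minimal RFC 8935 push-receiver sketch for testing against a
# transmitter such as caep.dev. Signature verification is elided
# here for brevity -- never skip it in production.
import jwt  # PyJWT
from flask import Flask, request

app = Flask(__name__)

@app.post("/events")
def receive_set():
    token = request.get_data(as_text=True)  # request body is the SET (a JWT)
    # WARNING: verification disabled for local testing only.
    payload = jwt.decode(token, options={"verify_signature": False})
    for event_type, event in payload.get("events", {}).items():
        print(f"received {event_type}: {event}")
    # An empty 202 Accepted acknowledges receipt (per RFC 8935).
    return "", 202

if __name__ == "__main__":
    app.run(port=8080)
```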
One of the things we were talking about earlier is that there still are, or might still be, policies, but SGNL, at least in my mind, and maybe I'm wrong about this, becomes a very important policy information point, where a big part of your policy decision takes place. Maybe you can share some examples of how companies that already have a policy-based architecture for authorization start using SGNL; that might give folks an idea of how it works. But also, how might a startup, or a company that doesn't have that infrastructure in place, take advantage of SGNL without needing to adapt to those existing policies? I think that could be interesting for people.

Great, yeah, thanks. So let's step back a second and clarify these terms: what is a PIP, a PDP, a PEP, and all those things. These terms date back to what was XACML; I forget what the acronym stands for, but it's an authorization policy language based on XML, and it's not used that much anymore. Of course, some companies still use it, but the terminology that came out of it is very interesting and is now used quite often.

First of all, you have the policy enforcement point, the PEP as they call it, which is typically part of your application. The policy enforcement point is what allows or denies a particular access. Now, the policy enforcement point may rely on a policy decision point in order to make the decision that it enforces, and SGNL is like a policy decision point. In many applications, which do role-based access control or some simpler form of attribute-based access control, the policy enforcement point and the policy decision point are both rolled up into the application itself. One of the things that externalized authorization management software does is separate the policy enforcement point from the policy decision point, and you can think of SGNL as a centralized policy decision point. Now, SGNL itself may use different data sources in order to get to that decision, and those data sources are the policy information points: they hold the information that is used to make the authorization decision.

So: the PEP is the enforcement, inside the application. The PDP is the decision point, which you can think of as SGNL, or a centralized policy service. And the PIPs are the information points that provide the information used to make the decision; in the SGNL environment, those are the data sources we rely on.

To get to your question, how companies that already have some existing policy framework or access management system can use SGNL: the simple answer is that you can think of SGNL as a layer over your existing policy management system. Say you're doing some RBAC kind of decision within your application, and that might be fine for many of your use cases. For example, if I'm just trying to log into the application, I don't need to know anything more than the role of the user, and that is fine.
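In code, the split being described might look roughly like this; the PDP endpoint and request shape are hypothetical stand-ins, not SGNL's actual API:

```python
# Sketch of a PEP calling an external PDP. The endpoint and request
# shape below are hypothetical stand-ins for illustration.
import requests

def pdp_allows(principal: str, action: str, asset: str) -> bool:
    """Ask the centralized policy decision point for a decision."""
    resp = requests.post(
        "https://pdp.example.com/access/v1/evaluations",   # hypothetical
        json={"principal": principal, "action": action, "asset": asset},
        headers={"Authorization": "Bearer <service-token>"},
        timeout=0.5,  # decisions are inline, so keep latency bounded
    )
    return resp.json().get("decision") == "Allow"

def view_customer(user: str, user_roles: set[str], customer_id: str) -> str:
    # Coarse check stays in the app (the PEP): basic RBAC for entry.
    if "support_agent" not in user_roles:
        raise PermissionError("not a support agent")
    # Fine-grained, reason-based check is externalized to the PDP.
    if not pdp_allows(user, "view", f"customer:{customer_id}"):
        raise PermissionError("no current justification for this customer")
    return f"customer record {customer_id}"
```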
But if I'm now trying to access a particular customer's data, you may then invoke the SGNL service to ask: does this user have a reason why they should have access to this particular customer's data? That is how you can complement your existing policy implementations with SGNL.

Sometimes your role management can get very complicated, because you're defining all these fine-grained roles in order to achieve policy. You may say this user has access to VIP customers because they belong to this particular role, and once they're logged in they get access to all the VIP customers. With SGNL, that role complexity can actually shrink, because you don't need these complicated roles. You just need some basic roles, and the rest you can do using just-in-time access management.

That makes sense. And do development teams end up integrating with SGNL through APIs, through SDKs? Do you have SDKs for specific programming languages or frameworks? How do you approach that?

Yeah, we definitely have SDKs for a lot of languages, and we also have ready integrations with certain SaaS providers, like Salesforce; if you're using those, you don't need to use the SDK itself. I think we support a number of languages at this point. We're also working on making it easier for applications to integrate transparently, using things like proxies, but that isn't out yet.

Cool. And finally, I'm a person who has worked a lot with distributed systems, scaling, and performance. Once you're using a service like SGNL to make these authorization decisions, or at least to help you make them, it becomes very important: you go through it every time you need to make a decision, which means that for every action a user takes, it needs to be up and it needs to be fast. How do you approach reliability and scaling at SGNL, in terms of providing a service that just works and that hopefully feels as seamless as if it were running in your own cloud?

Yes. The key to scaling, performance, and availability is simplicity. If you're doing a lot of complicated computation at the time you're trying to make the decision, you will end up in a situation where something fails or takes more time. What SGNL does, because of the rich nature of the graph we rely on, is make that computation really simple. As a result the decisioning is extremely fast, and it's reliable, because you're not trying to do something very complicated: all you're doing is traversing a few graph nodes to get to the decision. The complexity in SGNL goes instead into the integration of the data sources, and into distilling the policies you write into things that constrain the queries on the graph.

Something we haven't spoken about much, but that I'd like to mention, is that the richness of the graph also lends itself very easily to defining policies that are more readable. It's hard to express a readable policy when you have to work in terms of tuples, like "A, B, and C are related."
But if you have a graph, you can just say: the user needs to be assigned to a case that belongs to the customer in order to access the customer. That is actually a graph relationship, and so the policies you can write with SGNL, because of the graph database, are much more readable. Going back to your reliability question: you keep things simple in the evaluation path, and that is how you get reliability and availability. And obviously it helps that many of our engineers have a heritage in some of the largest companies; many of us came from Google, and we have the background that has helped us implement these highly performant, available systems.

Yeah, I just realized you've been talking about graphs a lot in this conversation, and we've also talked a lot about policies. We've done a couple of episodes about how these things fit into authorization, with folks from OPA, folks from Mercado Libre, and folks from Airbnb, who also implemented a graph-based system for authorization, modeled on the one from Google. It's very interesting how some of these concepts are becoming more mainstream now that authorization itself is becoming more mainstream.

Yep, yep, absolutely.

Finally, what deployment options does SGNL offer to customers? Where does it run? I think that might be interesting for some folks to hear about, particularly given the nature of the service.

Yeah. You cannot have a system today that just works in the cloud or just works on-prem; you have to offer both options, and so we do. If you want to use it as a SaaS, cloud-based service, you can do that, and we do all the heavy lifting for you. But some of our customers are concerned about geo-regulatory boundaries, so we have also deployed SGNL on-prem, and that's an option our customers have. This is the new world: because of these jurisdictional issues around data regions, you cannot really have a monolithic cloud-based service anymore. You have to design it so that it works across all these different regions with isolation, and then it becomes very easy to do an on-prem version of it.

Yeah, that was our experience at Auth0 as well: we did the on-prem version, we did hosting in your own cloud, we did hosting in our cloud with dedicated instances, we did the shared public cloud, and that optionality was a big advantage for us over the years. That makes sense.

From our side, thanks a lot, Atul. I want to ask you one final question, which is: is there anything else you would like to tell listeners? Maybe it's a call to action, or something they might do to help you. What would that be?

Yeah, I could put in a pitch for SGNL, but I'll use that one question to say: please adopt standards in order to get secure outcomes.

I like that call to action, and I will adhere to it. Thanks a lot; it was great to have you here. I learned a ton, and hopefully listeners will too once the episode comes out.

Thank you. All right, great talking to you. Thank you.