Good afternoon, everybody. And since we're being recorded, good time of the day to all the online viewers. My name is Andre, and today I'm presenting with my teammate Greg. We are part of the gRPC security team, and we both work for Google. There were definitely a bunch of really nice presentations today before ours, so it's tough to speak after them; the presentation with the dinosaurs was really, really amazing.

Today is an opportunity for us to present our latest progress to you, the gRPC community. Thank you all for submitting GitHub issues, and a separate shout-out for reporting vulnerabilities. It's really important for us, and this is how we make progress. I also noticed today that security was mentioned in pretty much all the decks, with different flavors, so it's definitely a very important topic nowadays. I'm also very excited that there are even international participants at the conference.

Before we start, just a few words about the format. Since we are presenting together, we have two sections. We will try to split it at about ten minutes each (since we are a little bit late, I see people are still coming) and leave about five minutes for questions at the end for both of us. Also, like Kevin and Abishek mentioned, we are around during the day, so feel free to reach out to us when we are in the lobby. We'll be happy to chat with you and answer any questions you might have.

OK, so today we'll be talking about the advanced functionality of gRPC security and what's new there. First of all, a few words about the ideology of the team: what we are trying to achieve, and what our design principles are. The first one is secure by default, and the second is advanced configuration and extensibility. To piggyback on the dinosaur presentation: the first one is like buying a bag of dinosaurs. You grab something out of the box and you want to play with it. You want to test it.
You want to check that things are working as expected, and it should have some meaningful secure defaults. That's what we mean by secure by default. Talking about advanced configuration: while I was checking different issues on GitHub and questions on grpc.io, I saw all kinds of different use cases where gRPC is used. One really caught my attention: a person was asking about gRPC running on a GoPro camera. We definitely cannot cover all the potential use cases like that, but the idea is that we provide flexibility for advanced users to implement the functionality they need.

So we're still adding features to the advanced API, which is experimental across languages. And "experimental" here really means it is already quite stable, but we still want to add a few more features before officially stabilizing it. The latest features, which we're going to discuss today, are audit logging and CRL support. Once we add all the features and the advanced API stabilizes, we'll make it public and stable. And again, we definitely welcome all community feedback. If you have some feature in mind and our current interfaces do not allow you to implement it, please reach out to us; please raise issues on GitHub. We cannot promise that we'll be able to work on all of it; as was mentioned before, there's always competition for resources. But upvoting also makes sense: it helps us understand that, yes, this feature, this request, is needed by the community.

So, talking about audit logging: why are we doing that? A few words about the modern security environment, because the environment shapes how we design things. If you listened to the speaker from VMware talking about zero trust security and zero trust architecture, then you are familiar with this stuff, but I'll do a quick refresher anyway. Zero trust architecture is a term from NIST; there is a publication you can easily Google.
The idea is that it's a set of cybersecurity paradigms that moves defenses from static, network-based perimeters to focus on users and resources. Access to resources is determined by dynamic policies, not by a protected perimeter and things like that. The other idea there is that trust is never granted implicitly, and we must continually evaluate what the authenticated user is trying to access.

In a lot of decks today, two features were mentioned: mTLS and OAuth. That brings us to AAA. What exactly is AAA? It's authentication, authorization, and accounting. Literally: who is making the request; whether that person or workload is allowed to do this kind of action; and the last A is tracking. Even if it's a successful event and that person is allowed to do things, we still want to record it. Audit logging is that third A. So right now, after the 1.57 release, gRPC has all of the AAA, and this is a really nice milestone because now we are covering all the A's.

Talking about use cases for audit logging: again, it's up to you what features you need, but we can think about the following potential use cases. For example, you already have a policy and it was changed dynamically; it's a very good idea, if your audit team requires it, to leave a trace. Say you have a policy with a few roles, and another policy with a different set of roles became active. By enabling audit logging, you can leave a trace in the stdout logger or in other parts of the system, and it can be presented to the audit team: hey, a new policy is definitely active right now. Similarly, there might be admin users executing privileged requests. For example, imagine a person who works on a booking system and can cancel bookings. It is a legitimate action, but leaving a trace of it is also very helpful.
Another use case: denials. Imagine you have a policy and everything is going fine, but you start to see a lot of denials. That's definitely a sign that something might be wrong, and it might be a notification for further investigation. And another potential use case is enforcement of "use it or lose it" policies. Imagine you have a policy with a bunch of roles. You keep monitoring the policy and you realize that, hey, the following paths, the following roles, were not active for the last month, or three months, or whatever period you use in your work. That's another sign that it's potentially a good idea to revoke such roles and policies.

A quick refresher on gRPC authorization: here we are talking about authorization policies. This engine is built into gRPC, and being built in, it's definitely very fast. What you currently see on the screen is a kind of dummy file, because the real format is JSON or xDS, and I omitted some things for simplicity. Here we have a dummy book service: it can read information about books and delete books. And this file protects the delete action: you need a special role here, a special SPIFFE ID, just to be able to execute the delete action.

Talking about audit logging, here's a high-level design overview. First of all, it's an extension of the authorization policy: a new section in the same file. It gets executed after the authorization decision is already made. And we, the gRPC team, provide some out-of-the-box loggers. Right now there is a stdout logger, implemented and already available. It's a very simple one: it prints the logging statements, which I'll cover later, to standard output. There are more examples under the testing section, so I encourage you to take a look. It shows you how to implement things, how to register them, and definitely how to test them.
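As a sketch, the bookstore policy described above might look roughly like this in the JSON authorization policy format (the service name and SPIFFE ID are made up for illustration, and some fields are simplified):

```json
{
  "name": "my_bookstore_policy",
  "allow_rules": [
    {
      "name": "allow_all_reads",
      "request": { "paths": ["/bookstore.Bookstore/GetBook"] }
    },
    {
      "name": "allow_delete_for_admins",
      "source": { "principals": ["spiffe://example.com/admin"] },
      "request": { "paths": ["/bookstore.Bookstore/DeleteBook"] }
    }
  ]
}
```

Anyone can read book information, but only the admin SPIFFE ID matches the delete rule; all other delete requests are denied.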
If you want to, as a power user, you can implement pretty much any logic here, but definitely beware. If you implement this interface and start receiving authorization decisions, nothing stops you from storing this data in a remote database, and we as the gRPC team have no control over that. Imagine the performance in that situation: you have a very performant gRPC server, and at the same time, during the request flow, you try to store information in an external remote database where even the ping might be a second or so. So it's a very powerful mechanism; just be careful with it.

As for what metadata you get, let me go to the next slide. What you see here is the metadata available to you: the full method name (again, I'm covering our dummy bookstore service); the principal, that is, who made the request; the policy name, which might for example be a file (here we have my bookstore file); the matched rule, that is, exactly which rule the request matched; and the authorization decision. In this scenario it is false, because a regular user cannot delete a book.

Now, the last slide: how to do a very simple hello world, how to enable this. Again, I'm omitting a good chunk of the policy file, but you see at the end there is a new section named audit logging options. Under that, we have an audit condition, ON_DENY. The thing to keep in mind is that "deny" here is the outcome; it's outcome-based. Both allow rules and deny rules can produce a denial, for example. And the last few lines are about the audit logger: you see there is a name, stdout logger. Again, it's the pre-built logger which is available, and it's already registered. So if you just add these few lines of configuration, you can run some requests, and this is a simple outcome you might see.
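The new section at the end of the policy file could look roughly like this (a sketch based on the audit logging extension to the authorization policy; exact field spellings may differ between versions, and the allow rules are elided):

```json
{
  "name": "my_bookstore_policy",
  "allow_rules": [],
  "audit_logging_options": {
    "audit_condition": "ON_DENY",
    "audit_loggers": [
      {
        "name": "stdout_logger"
      }
    ]
  }
}
```

With ON_DENY, only denied RPCs produce audit records; other conditions cover allowed outcomes or both.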
Again, there is another JSON wrapper around it, but this is a good sample of the metadata you are going to see. This is the built-in logger; if you would like to implement something on your own, it would be very, very similar. And again, I encourage you to take a look at the testing examples; you will even see how it's possible to combine multiple loggers. Thank you. Over to you, Greg.

Hi, everybody, my name's Gregory, and I'm here to talk a little bit about what we've been doing with certificate revocation lists. To discuss CRLs, first we all have to get on the same page about public key infrastructure, or PKI. This is a very deep topic and I'm really just going to scratch the surface, but I want to make sure we're all on the same page. To use an example that a lot of us probably do every day: you go to your favorite website in your browser, google.com, and Google presents you a certificate. Your browser knows how to take that certificate and verify that this google.com is actually truly the google.com that you want to connect to, and not somebody spoofing that information. That certificate comes from somebody called the certificate authority; they're responsible for issuing certificates. Your browser trusts them, Google trusts them; they're a trusted third party.

That certificate authority can also issue certificate revocation lists, saying: hey, I did issue this certificate, but you shouldn't use it anymore; it's not good. There are a variety of reasons; maybe a private key leaked. It's just a good thing to be able to do. RFC 5280 defines the X.509 standard for certificates and certificate revocation lists, if you really want to dive deep into that standard for internet PKI. Thinking more about gRPC: it's not a human on a browser connecting to a website, but services talking to services, a lot of the time wanting to do mutual authentication with each other.
And in this case, your organization, whoever you're running gRPC for, probably has a team that is in charge of, and is, your certificate authority. They're creating certificates and certificate revocation lists and distributing them all over your fleet. Often they end up on the disk of the servers that your microservices are running on. So that's what CRLs are, and hopefully you understand why we want to be able to support them in gRPC.

The current support is very tightly coupled to OpenSSL, and it's not the best. It uses OpenSSL's directory-based lookup (X509_LOOKUP_hash_dir), and this sets a very specific requirement on a directory and how the files in that directory are named and formatted in order to read CRLs. That means your PKI now has to depend on something gRPC is doing, which we don't like: we don't want to force you to do something with your PKI, and there's zero flexibility for you to do other things. Another issue: because it's based on OpenSSL, languages like gRPC Go that implement their own stack and don't use OpenSSL have had to re-implement this behavior, and that just leads to tiny inconsistencies in security. It's also just kind of annoying for everybody trying to use it.

And this is what it looks like; I hope you all can read the highlight. You just create your channel credentials options and set a CRL directory. Then you pass these channel credentials through to your channel, you create the channel, you make connections, and during the handshake it will use the CRLs in this directory.

So what we're trying to do better is follow the design principles that Andre talked about. We want to make a CRL provider interface that is flexible and overridable, easy to use, but with the flexibility for you to implement different behaviors. It's also really nice that we already have credentials and credential providers, so there's a really nice semantic consistency in the API.
If you see credential providers and you're implementing something with them, you'll probably also want to implement something with CRL providers. And on the third point that Andre talked about, we're trying to provide these batteries-included, good default implementations that we hope will cover many common use cases. For example, we have things like a static provider (give us a CRL string and we'll make sure we use it) and periodic reloading of directories. It's very common to be distributing these things while a server runs, and being able to have your gRPC server read in newly distributed CRL files without restarting is definitely a pro. You don't want to have to restart your server whenever you distribute new credentials and CRLs.

I decided to use some pseudo-ish Go code for this example, but this is what you can expect it to look like. It's not currently in the code base yet, but it should be quite soon. It's very simple: all you have to do is override the get-CRL method, and you return a CRL associated with some certificate you're trying to verify. On the back end of this interface, you can implement whatever you want to store and get CRLs, and then just return them through this interface. And then it cascades through the options the same way the other option did: you have your options, and you set this provider. Now, when the server goes to do a handshake and verify a certificate during the handshake, it'll ask this provider: give me a CRL so that I can verify this certificate.

So that's pretty much all I have on certificate revocation, and we're back for the Q&A session. If you have any questions on any of this, we'll be happy to answer. Cool, any questions, anyone?

So the authz policies that you have shown, the principals are specifically SPIFFE IDs, which come from the certificates.
Is there a reason why you did not use regular username/password? Can regular users with username/password types of authentication go through the authz policies?

Oh, OK, thank you for the question. It's a slightly different idea in general, because on the policy side the user or workload is already authenticated. The policy doesn't actually know about SPIFFE or how exactly you obtain the username/password or anything like that; it just enables you to create a set of rules. You know your own infrastructure and how a user or workload is authenticated.

Hi, I have a question on audit logging. In the audit logs we have a full method name, right? But can an allow or deny decision be based on the internals of the RPC message? For example, if we can delete only some subset of books, by book ID or book category; something that is not in the full method name.

Yeah, that's a good question. It's not about audit logging itself, but about how the authorization engine works. Right now, as far as I remember, there are different types of matchers, and yes, it is quite powerful; for example, you can even rely on different headers and so on. So I definitely encourage you to check the gRFC for the authorization engine itself. It is also open source and released, and you can find all the different details and mechanisms there. And again, since the audit logger is an extension, it just takes the decision itself.

Wonderful. Oh, one more question, on the CRL provider. How often is the provider called? Because getting the CRL could be a time-consuming procedure. If it's called for every incoming request, does that mean some caching mechanism is required in the provider implementation, something like that?

Right. So every time you're doing a handshake and you want to verify certificates, you're going to want to get the CRL.
So the idea is that, yes, you'd have it stored in memory on the provider, so when this function gets called it can return something very, very quickly from memory, because it's on the critical path of making a connection; it's during the handshake. You would not want to make a network call out to fetch a CRL every time this is called. You'd want something that loads it asynchronously and returns it from memory.

Yeah, so just to clarify: what you mean is that that part is the responsibility of the CRL provider; it's not in gRPC itself. Right. OK, thank you.