Welcome. Yeah, still waking up? All right, there's the guy that's awake, right? Anybody else? Oh yeah, woo, I'm awake. Have a nice day. All right, anyways, we've got — I'm gonna try and say these names right — Narayan, Lakshmi, and Divya. They are from a large application security team, and they're gonna be talking about authentication: doing it right, doing it wrong. The title of the talk is "Who Dis? The Right Way to Authenticate." And without further ado, here you go. Hey, good morning everybody, and thank you for making it on a Sunday morning. So, welcome to our talk, "Who Dis? The Right Way to Authenticate." Who are we? Thank you for the introduction, David. I'm Lakshmi Sidhir, she is Divya, and he's Narayan. We do all things application security. We try to be involved in all stages of the SDLC to make sure it's secure. We work with, and for, developers; as security engineers, we consider that our primary responsibility. So we work with them to make sure secure products are being shipped out there, and we help them with the security guidelines around whatever they're building. Before we go into the talk itself, just a note: everything we've used for our research is publicly available data. We haven't used anything internal to a company or anything like that. One more thing we want to make clear is that we are not encouraging the use of anything that is inherently insecure. This is just an attempt to say: if your business case forces you to use a relatively insecure part of a protocol, how do you go about making it secure? That is what we are trying to address in this talk — not to encourage any inherently insecure component of any technology or protocol. Let's dive right into the talk now. Okay.
So the agenda for today: we're going to start with a background on what authentication is, and then address why it is still a problem — that's what leads us to the problem statement. We're going to get into more detail about a bunch of authentication protocols and schemes. Then we're going to draw a conclusion around why this is still a problem and what patterns we observed across different authentication schemes. We also want to leave some time for Q&A towards the end. So, what is authentication? What comes to mind as soon as I say authentication? It could be Kerberos. I think passwords might be the top one. API keys, OAuth, SAML, and a million others. Something that is closely coupled with authentication is identity. What is identity? Identity is who you are, and authentication is what verifies your identity. So authentication is what verifies who you are — it's what actually verifies that I'm Lakshmi, she's Divya, and he's Narayan. Now, how do you authenticate? In general, we use passwords, which is pretty common when we log into any application. Or you might have used your PIN to draw some money to gamble a little while you're at the conference — who doesn't want some fun? That's a secret that you know, which you use to identify yourself to an application, saying, hey, it's me who's logging in. Then there are YubiKeys or some other device that you own, which is the possession aspect. And the other part is when you wake up, look at your phone, and it's your Face ID — or you might be using Touch ID. That is what you inherently are. So authentication boils down to three things: something you know, that's knowledge; something you have, that's possession; and something you inherently are, that's inherence. Now, where does authentication come into play?
Broadly, it's user authentication and service authentication. With user authentication, I was talking about you logging into an application, drawing out some money, and otherwise interacting with a system. For example, when we were doing this research, we wanted to pull up some of the CVEs and bugs associated with a bunch of authentication protocols. How did we do that? We logged into an application, and this application spoke to multiple applications in the background. Those applications reached out to their own data repositories — be it a database, be it an S3 bucket — pulled out all that information, brought it together, provided it to the requesting service, and that service fed it to a BI tool to give us readable, ingestible data. In each of these interactions, each of the servers is talking to another entity, and each of these entities had to authenticate, saying, hey, this is me who's requesting, and I do have access to this. So whenever these entities interacted, they had to verify their identity. In this ecosystem, authentication is pretty much the backbone of everything we do today. Now, in this complex ecosystem where we're using authentication everywhere, do you think there's a problem? Of course, yes. That is why we are here on a Sunday morning talking about authentication. Why is there a problem? Let's look into that more. But if you're a skeptic, we have some data for you. The OWASP Top Ten, in its latest revision, still puts broken authentication in second place, right after injection. This shouldn't be the case in 2019, is what we thought. Then we pulled up more data, and we got this HackerOne report of how much companies are spending on bounties, broken down by the classification of the bugs being submitted. And this was surprising to us. I mean, cross-site scripting is expected. Unfortunately, we have that. That's probably for another talk.
But broken authentication was still the second most common thing that companies spent their bounty money on. And this is where we thought — we didn't want to remember a password for each application, which would make it harder to keep them complex. That's how we came up with single sign-on. So there are solutions based on issues we have faced before. But why is this still the second most paid-out bounty class? That was mostly the inception of our talk. We were like, this requires some research. What are we doing wrong? What is going wrong? So this was where we said, hey, let's go dig into some of the protocols and see how we can go about this. Of course, we looked deeper. There's been leakage of tokens — tokens that have been leaked because of redirect URIs — and a bunch of other reports that we dived into. There were tokens which weren't expiring at all, as well. So we thought we'd come up with an approach and figure out how to treat this as a holistic problem. Here's how we went about it. Initially, we went through the top 200 applications and identified the most commonly used authentication protocols. Then we identified the high-risk issues, and we thought, let's attack those. After the research, our idea was to come up with remediations — something pragmatic and useful. And the end goal, like I said: as security engineers, we feel we hold a responsibility towards helping developers build things securely and implement all these existing protocols securely. So that's how we approached the problem. For today's focus, we're going to start off with JSON Web Tokens. Then we're going to dig a little deeper into OAuth.
Then, of course, passwordless solutions — that's magic links. And then we're going to talk a little bit about SAML as well. These are the four things we chose to go a little more in depth on for this talk. Let's start with JSON Web Tokens — JWTs, pronounced "jots." What is a JSON Web Token? A token is nothing but an identifier. A JSON Web Token is an encoded JSON blob with some key-value pairs, which is digitally signed and provided as a token by an authorization server. I know I said a lot of words, so here's where it could be applicable. Let's say, as a user, I log into an application using Facebook, Google, or any of the social auth providers. There is an authentication server which validates, hey, this is the right username and password, and then it provides a token — a JWT in this case. My identity was verified, and this token is given to me as a unique identifier. Now when I want to log into, say, Quora, I pass this token along. The application validates it based on the signature — you could use both symmetric and asymmetric algorithms. So let's say my application has the public key of the authentication server. It validates this token, and if the validation passes — meaning it wasn't tampered with in transit — it services the user, giving them the data they're requesting. Now, why do developers use JWTs instead of classic session tokens — the session IDs, which are just unique random strings? With JWTs, you achieve authentication, because you rely on the integrity of the token itself, and you can also achieve authorization by adding scopes. One of the most important differences is that with a session identifier, you'd have to store those unique random strings along with the user ID, or maintain some kind of state, for every request. With JWTs, that state lives in the token itself.
When the token comes in, you validate the digital signature and go about servicing it, which makes it stateless. This is one of the key reasons most implementations have chosen JWTs. It's also compact and lightweight — this is a little debatable, because it depends on how much information you're putting into the JSON body. And it's scalable compared to session tokens, because you're not maintaining all of these unique session IDs and looking them up as requests come in. Again, coming back to the point that this is a JSON body with keys and values: in situations like OAuth, you can use it to exchange information across third parties. So where is it popularly used? It's used in OpenID Connect as the identity token. It's used in OAuth as access tokens and refresh tokens. And JWTs are also used as session objects — we're going to talk about that being a tricky place to be in a couple of slides. Now that we've talked about JWTs generally, let's look at the structure of a JWT. On the left side you see the header, payload, and signature. This is nothing but a base64url-encoded blob, and it's what gets passed to the server. Remember that there's no confidentiality here — anyone who gets this token can decode it. The security rests on integrity, that is, validating the digital signature. When you decode the header, you see the type of algorithm that was used to create the digital signature, and then the type — we're talking about JWTs, so that's just going to be JWT. The payload is where you have all the key-value pairs: the name of the user, what the user has access to, or any kind of data.
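To make the "no confidentiality" point concrete, here is a minimal stdlib Python sketch — the claim values are our own illustrative examples — that builds the encoded segments of a JWT and then reads the payload back without any key:

```python
import base64
import json

def b64url_encode(data: bytes) -> str:
    # JWTs use base64url with the trailing "=" padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(segment: str) -> bytes:
    # Restore the stripped padding before decoding
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

# Build a sample token body (the signature segment is left as a placeholder)
header = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload = b64url_encode(json.dumps({"sub": "joe", "scope": "mail"}).encode())
token = f"{header}.{payload}.placeholder-signature"

# Anyone who holds the token can read the claims -- no secret key needed
claims = json.loads(b64url_decode(token.split(".")[1]))
print(claims)  # {'sub': 'joe', 'scope': 'mail'}
```

The signature only proves the token wasn't altered; it never hides the contents, so secrets must not be put in the payload.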
This is the section you mostly use to exchange information between third parties, or to add assertions — these are called claims, for the most part. And then you have the signature, which is hashed and digitally signed, such that when an application gets the token, it can validate it and make sure it wasn't tampered with. So we have the header, the payload, and the signature of a JWT. Let's move on. How many of you here — show of hands — think this is a valid JWT? Okay. You guys are right. This is, unfortunately, a valid JWT. One thing I think you'll notice is that we have the header and the payload, but the algorithm says none. This is used in some edge cases and allowed by the JWT RFC itself. So when the algorithm is none, if you do not validate or handle it, the library that's validating this token looks at the algorithm, sees none — which is supported by the RFC — and then simply disregards the signature, declares the token valid, and goes ahead with servicing the request. Now, as an attacker, I can create any token, and every token would be valid. I can perform any action, make any request, and get all the information out there. This has been known since 2015 — and the JWT RFC itself has been around since about 2015 — yet it's a problem that is well known and still prevalent out there. How do you go about remediating this? The best-case scenario, with a programmatic control, is that if you're using one particular algorithm, you hard-code it while validating: hey, this is the algorithm every token must be validated against. That's the best case, but it may not apply in most business cases.
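To make that attack concrete — this is our own illustration, with made-up claims — a forged alg:none token is nothing more than two encoded JSON segments and an empty signature segment:

```python
import base64
import json

def b64url(data: bytes) -> str:
    # base64url with padding stripped, as JWTs use
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# If a validator honors "alg": "none", this forged token passes as valid:
header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "attacker", "admin": True}).encode())
forged = f"{header}.{payload}."  # note the empty signature segment
print(forged)
```

No key material is involved at any point, which is exactly why a validator that trusts the alg field as-is is broken.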
That's where the second option comes in: you whitelist the set of algorithms you're using and validate against it — the algorithm must match one of those strings before the token is serviced. That's one approach, and of course, verifying that the algorithm field is not none is also a reasonable solution here. Something else you can do — as with any software component you build — is choose the right library; that choice carries a lot of weight. JWT.io has more detailed information, so based on your tech stack and what you're using to validate your JWTs, you can make an informed choice, understand the inherent risks, and if the library supports the none algorithm, implement the branch where you say: this is something I need to validate. So say we're doing all this well — we're validating all our tokens against none, and we're using the right library. What else could go wrong? Let's say we have a user, Joe, and he has access to a mail server. He uses an authentication server, gives his username and password, the server validates it and presents back the token, and the token is valid. Joe sends this token to the mail server, which is the client application; it validates the signature, and it's perfect, because Joe hasn't done anything malicious so far — it's the right token. Now, what if Joe presents this same token to another service — a finance service — maybe because he wants to increase his payroll and get more money? This finance service uses the same authentication server, but Joe shouldn't have access to finance ops. In this case, just validating the token is not good enough: the client application should also make sure that the user requesting access actually has access to finance ops.
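Stepping back to the algorithm checks for a moment, those mitigations can be sketched as a minimal HS256 verifier — a stdlib Python sketch assuming a shared secret; in practice a vetted JWT library is the better choice — which pins the algorithm to an allowlist before ever touching the signature:

```python
import base64
import hashlib
import hmac
import json

ALLOWED_ALGS = {"HS256"}  # explicit allowlist: "none" can never match

def b64url_decode(segment: str) -> bytes:
    # Restore stripped base64url padding before decoding
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def verify_jwt(token: str, key: bytes) -> dict:
    header_b64, payload_b64, sig_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    # 1. Check the algorithm claim against the allowlist FIRST
    if header.get("alg") not in ALLOWED_ALGS:
        raise ValueError(f"algorithm {header.get('alg')!r} not allowed")
    # 2. Recompute the HMAC over header.payload, compare in constant time
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(key, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("signature mismatch")
    return json.loads(b64url_decode(payload_b64))
```

An alg:none token fails at step 1 before the signature is even looked at, which is the property both mitigations are after.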
What is the mitigation for this? One of the things you can use is a predefined JWT claim called audience (aud). You put it in the JSON body — as you can see here in this example — to say this token is intended only for the user database. That way, when you're a client application validating a JWT, you look at this parameter and say: hey, I trust the issuer, but I am not the audience, so I shouldn't be providing access to Joe in this case. Now we've validated the algorithm, we've chosen the right library, and we're taking care of user-substitution and replay attacks. What else should we be worried about? If JWTs are being used as session objects — remember, I mentioned that a couple of slides earlier — then how do you log out? Logging out is a harder problem, because even though JWTs may have a short TTL — and again, "short" is debatable, but let's say two hours — when I log in and then log out, the token is still out there, still valid, because there's no backend state; the token just carries its own expiry. So logout scenarios need to be handled very carefully. Something you can do for this is use the JTI claim — the JWT ID — which is a unique identifier. Maintain this identifier on your backend, mapped to the user, and once a user logs out, you can revoke it.
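As a sketch of those two claim checks — the claim names aud, exp, and jti come from the JWT RFC, while the in-memory set here is just a stand-in for whatever shared revocation store each validator would subscribe to:

```python
import time

revoked_jtis = set()  # in practice: a shared store every validator subscribes to

def check_claims(claims: dict, expected_audience: str) -> None:
    # Reject expired tokens
    if "exp" in claims and time.time() >= claims["exp"]:
        raise ValueError("token expired")
    # "aud" may be a single string or a list of strings per RFC 7519
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    if expected_audience not in audiences:
        raise ValueError("we are not the intended audience")
    # Reject tokens revoked at logout, even if they have not expired yet
    if claims.get("jti") in revoked_jtis:
        raise ValueError("token has been revoked")

def logout(claims: dict) -> None:
    # Remember the JWT ID until the token's natural expiry
    revoked_jtis.add(claims["jti"])
```

The mail server would call check_claims with its own audience value, so a token minted for the user database fails the aud check even though its signature is perfectly valid.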
The choice was made for JWTs to be stateless, but now it's no longer stateless, because you need this revocation list. And it's not enough for just the authorization server or the authentication server to have the revocation list: in the case of asymmetric signing — actually, anything you use for digital signing — each of the client applications also needs to maintain or subscribe to this revocation list. Otherwise, while validating the token, the token would still look valid, because it hasn't expired. So revocation has to be handled by maintaining a list, looking up the JTI — the unique identifier — and making sure it wasn't revoked before servicing the request. Putting all of this together, some key takeaways: first — I can't say this enough — validate your algorithm claim; make sure it's exactly what you use. Second, use cryptographically strong algorithms. Third, utilize all the applicable claims already provided in the JWT RFC — when the protocol does the heavy lifting for you, why not let it do its job? And I can't say this enough: even today, revocation is a very common problem we see, so please handle revocation carefully. Any time a JWT is used as a session object, think of revocation automatically. So that's JSON Web Tokens. Now Divya is going to take over and talk about OAuth. Thank you. Okay, still with us? All right, the next topic is OAuth — open authentication or open authorization, however you want to call it. In case you're unfamiliar, or if you want a quick primer: OAuth deals with mainly three entities — the client, the resource owner, and the authorization server. The client wants to access some resource, that resource is provided by the resource owner, and the resource owner needs to
authenticate that it's the right client accessing the resource. So you have another party, the authorization server, which — before this traffic flow happens — knows who the client is and can provide that verification information to the resource owner. The way all this happens is through a bunch of calls, but the main thing that gets exchanged between these three entities is tokens. There are three kinds: access tokens, refresh tokens, and the authorization code, which I'm still going to group under "tokens." So, if any of them — especially a client, or a web browser acting on behalf of a client, or even a separate service acting on behalf of a client — is in possession of a token, that means they have access to the particular resource they're trying to reach. This works perfectly when every system follows the best practices possible, and I'm pretty sure all of us are familiar with how likely that is. So I was pretty surprised to see so many words — lost, stolen, intercepted, proxied, replayed, and altered. I thought this was excessive, but then I was able to find multiple cases of tokens being lost or stolen or intercepted, and this still hasn't been corrected. One mitigation that's offered for handling tokens properly is to pass them over TLS and use a short TTL. And I distinctly remember one person saying to me that the TTL cannot be forever, so anything shorter than forever is a "short" TTL, right? So it should be okay. I've had one person tell me, I'm going to keep it at five minutes, so it should be okay. For another person, short is half an hour; for another, it's six months; for another service, it's a year. All of this is contextual, and even in the shortest case possible — are we okay with that open attack window? Not really, right? So what if that's not enough? What if we want to protect the tokens in all situations? Token
binding. This is a fairly new concept, and there have been varying opinions on it. In this presentation I'm just going to lay out that there is a concept of token binding available, and also a kind of stopgap measure that could be applicable in case full-fledged token binding is not available. So what exactly is token binding? In a normal OAuth workflow, you have just a token being passed between all the entities. With token binding, there is contextual metadata attached to it. There's a pre-negotiation workflow between the client and the authorization server, where the client registers a key pair with the authorization server. With every subsequent call, the authorization server knows the payload coming in will contain both the token and something attached to it. The way this happens is, during client-server negotiation, a payload is agreed upon between the client and the server. It consists of a token binding ID — which could be anything, but the recommendation is that it be between 0 and 255 characters — and the other thing they negotiate is which signature scheme will be used for digitally signing the payload, which could be one of the algorithms specified on the slide. This makes sure the authorization server properly rejects an incorrectly signed payload, because anybody could fake a signature and send it across — more on that later. There's other metadata the client can negotiate with the authorization server, too: extensions, the binding type, or any other custom payload you want to add, like a token binding message that only the client and the authorization server are aware of. For the purposes of this presentation, and the tie-ins with other protocols, I'm choosing to focus on the token binding ID and the signature schemes. So what exactly happens during negotiation? Now the
client has passed on the key pair, and the authorization server has registered it. So on subsequent calls, the TLS handshake happens between the client and the server as usual, and there is one additional call containing the Sec-Token-Binding header provided by the client. What this contains is the EKM — the exported keying material — along with the payload and the message that were agreed upon between the client and the authorization server. Right now there is API support: if you're using nginx or Apache you have this available and could extend it; there's support for Java 8 and 10 as well. I've seen rudimentary implementations for Django, but wait until it becomes official. As you can see in the workflow, you have the Sec-Token-Binding header, and once the authorization server is ready to provision the client with tokens, it attaches the token binding ID to the refresh token. So for every subsequent message from the client, you're not just verifying the identity of the client based on the token that was provided — which could be hijacked — you also depend on another secret shared between the client and the authorization server. Some of you may be aware there is no browser support, or dwindling browser support, for token binding. What happens in that case? The developer can still do a mutual TLS transaction. This is expected in the format of JSON Web Keys, and it works on a very similar note, except that instead of passing the header, you send across the client certificate. The authorization server still has to register it, still has to maintain a separate data store for this binding ID, and with every subsequent call, instead of expecting a header or a bound token, you expect the token plus the certificate that was already
registered. Now, if you're not taken with token binding and you feel there isn't enough support for you, you can definitely look into PKCE. Raise of hands — how many of you are familiar with PKCE or have heard of it? Awesome. PKCE is Proof Key for Code Exchange. It's available as one of the OAuth flows, mainly aimed at mobile — workflows where client secret storage becomes cumbersome. But if you're looking for a context-binding solution, you can definitely look to PKCE as an alternative. The only way this differs is the initial negotiation call: the client provides a code challenge, derived from a code verifier, and also provides the transformation method, so the authorization server knows the code challenge and how it was derived. That sets up the platform to expect the matching verifier in the subsequent workflow. As you can see, when the client makes the access token request, all it needs to do is send the code verifier along with it. Again, the assumption here is that an attacker has the code verifier but not the access token, or the access token but not the code verifier. If they manage to sniff both, the situation is FUBAR. So, you saw a couple of screenshots before, and we saw a couple of open-redirect CVEs there. This one stood out to me because of how simply it was achieved. It was a manipulated redirect URI, and the only unexpected character passed here was a percent symbol. The server side wasn't expecting an extra character, so the validation completely failed — but it didn't fail closed, so it still passed the token on to the manipulated redirect URI. The other thing that went wrong here was passing the redirect URI in the Referer header. So how do you mitigate against this? Now, a minimal viable solution
here is to extend your trust boundary to include your client: you're choosing to trust that the client will pass the correct redirect URI in the Referer header, and the server will interpret it and pass the token along. That is the minimum viable solution. But if you want to increase your security while still supporting access tokens for dynamic URLs, the other option is to maintain a whitelist of URIs on the server side: you expect the client to pass fully formed URIs, and you do exact string matching against the list, rather than loose matching on part of one. The burden is then on the developer — they have to maintain the whitelist, it has to be robust, and everything else that comes along with that. The best possible solution, which I'm pretty sure will get some pushback, is to maintain exactly one URI, statically registered with the server. You do not pass anything in the Referer header, and any time the particular client interacts with the authorization server, the server automatically knows where to send the token because of that registration. There's a one-time chance of compromise at registration, and in all subsequent workflows you're preventing the URI from being leaked. So that was OAuth, and the two issues were token binding and the redirect URI. Moving on to magic links. What exactly are magic links? You're removing the burden of client secrets, and you expect no passwords. The only secret you generate is a short-lived — hopefully good-entropy — secret that you pass along to either the network provider or the email service, who carry the burden of storing it. Here you depend on short-lived secrets, and on secure transport for the secret to travel from your server to whoever is handling it.
So in case you know somebody who is going to be implementing passwordless, or is taken by it, make sure they follow the best practices here — at least these, based on what I have seen so far. First: wherever the secret is being sent, make sure the mobile number is verified and the email is verified. It's laughable how many times the secret is shared with an unverified email or mobile number. Second: when you generate the token, make sure it has sufficient entropy, and, like Lakshmi mentioned before, use a good token-generation scheme — for example, a JWT. Even there, make sure you specify who it's issued for, and use the right algorithm; even though MD5 is available as an option, please don't go there. Third: bind the tokens — extend the context binding idea from JWT or OAuth. Keep track of where the request to generate a secret is coming from. Keep a store of whatever secrets you generate, either in full form or truncated — just make sure you're able to identify and track them — and also track where the responses are coming from. One of the rudimentary checks you can do is whether the request and response come from two separate devices; if you don't want to do that, just keep track of the source and how many requests come from it. And speaking of how many: make sure you rate-limit the incoming requests. Cap secret generation at a finite number — single-use if possible, though I know there are usability cases where you have to allow it three or five times — so make sure you have a rate limit on that, because otherwise it's super easy to enumerate, generate the secret, and run a phishing attempt yourself.
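Pulling those practices together, here is a rough sketch of issuing and redeeming a magic-link secret — the limits, URL, and store are our own illustrative choices, not from the talk's slides, and a real deployment would use a shared data store and send the link over a verified channel:

```python
import secrets
import time

MAX_ACTIVE_LINKS = 3      # rate limit per address
LINK_TTL_SECONDS = 600    # short-lived secret
_pending = {}             # token -> (email, expires_at); a shared store in practice

def issue_link(verified_email: str) -> str:
    now = time.time()
    active = [t for t, (email, exp) in _pending.items()
              if email == verified_email and exp > now]
    if len(active) >= MAX_ACTIVE_LINKS:
        raise RuntimeError("too many outstanding links; try again later")
    token = secrets.token_urlsafe(32)  # high-entropy, unguessable
    _pending[token] = (verified_email, now + LINK_TTL_SECONDS)
    # Hypothetical login URL, sent to the already-verified address
    return f"https://app.example/login?token={token}"

def redeem(token: str) -> str:
    email, expires_at = _pending.pop(token, (None, 0.0))  # pop => single-use
    if email is None or time.time() > expires_at:
        raise RuntimeError("invalid or expired link")
    return email
```

Using `secrets` (not `random`) covers the entropy point, the TTL and the pop-on-redeem cover short life and single use, and the active-link cap is the rate limit against enumeration.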
So that's magic links; on to the next topic. Thank you, Divya. The last part of the presentation today is about SAML, the Security Assertion Markup Language. Before we talk about the security issues and the remediations and mitigations we can offer developers, I'll take this opportunity to set some context on why we decided to talk about SAML and what SAML is. As you're aware, people are moving towards the cloud, and according to recent independent research, people on average use around eight different applications for their day-to-day activities — supporting IT, HR, payroll systems, and different internal operations as well. So we definitely need an enterprise setup, and that's where single sign-on comes into play, and the underlying protocol which powers single sign-on is SAML. When you use single sign-on, you're obviously not worried about password reuse or multiple password stores — attackers always try to hit the weakest link in the chain — and you don't have to maintain multiple password stores, or think about password hashing and securing those stores with network isolation. And as you're aware, IAM is definitely a challenge: having good authentication and authorization across the board is a challenge for organizations. I've also seen that vendor support is almost a marketing metric for single sign-on vendors like Okta and OneLogin, where easy single sign-on integration for organizations is the selling point. So, what is SAML?
SAML is essentially XML data with multiple digital signatures and assertions attached. The digital signatures are used to verify the authenticity of the service provider and of the identity provider — the underlying single sign-on system. SAML also provides authorization on top of authentication: assertions carry user IDs and metadata, such as user attributes, which your portal can use for authentication and to make authorization as granular as your context requires. A signed SAML assertion is usually around seven to nine kilobytes of data, which is pretty verbose for you — or for a library — to validate and derive the right authorization from. So let's dive into the security issues we have seen with SAML. Just as Divya and Lakshmi mentioned short TTLs and expiration, SAML has something similar: the NotBefore and NotOnOrAfter conditions, the timestamps within which your portal should trust and honor a particular assertion, embedded in the assertion signed by the identity provider. The other thing is using a nonce to prevent replay attacks: your portal needs to cache the ID of the SAML request it sends, which comes back as the InResponseTo attribute in the SAML response, and check it — so that a single SAML assertion cannot create multiple sessions. It should be one SAML assertion creating exactly one user session.
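A rough sketch of those two checks — the NotBefore/NotOnOrAfter validity window and the InResponseTo replay cache — assuming signature verification has already happened and the assertion has been parsed into a plain dict. The field names and the two-minute clock-skew allowance are my assumptions; a real service provider would get these values from its SAML library.

```python
from datetime import datetime, timedelta, timezone

CLOCK_SKEW = timedelta(minutes=2)
pending_requests: set[str] = set()   # IDs of AuthnRequests we actually sent
consumed: set[str] = set()           # assertion IDs already turned into sessions

def accept_assertion(assertion: dict) -> bool:
    """Honor a (signature-verified) assertion only inside its validity window,
    only in response to a request we issued, and only once."""
    now = datetime.now(timezone.utc)
    if now + CLOCK_SKEW < assertion["not_before"]:
        return False                 # not yet valid
    if now - CLOCK_SKEW >= assertion["not_on_or_after"]:
        return False                 # expired
    if assertion["in_response_to"] not in pending_requests:
        return False                 # unsolicited or forged response
    if assertion["id"] in consumed:
        return False                 # replay: one assertion, one session
    pending_requests.discard(assertion["in_response_to"])
    consumed.add(assertion["id"])
    return True
```

Discarding the request ID on first use is what enforces "one assertion, one session": a second presentation of the same response fails the InResponseTo check.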
And since SAML is just an XML document with digital signatures, it is also exposed to XML-specific issues. External references in the XML can lead to a denial of service on your SAML endpoint — for example, this diagram shows how you define an entity and recursively reference it so that it expands exponentially. That's the billion laughs attack, which can bring down your SAML systems so they're unavailable for legitimate users. It can also lead to XML external entity (XXE) attacks, where an internal file such as /etc/passwd gets sent to a remote server if you don't have the right mitigations — I'll cover those in the upcoming slides. SAML also carries a lot of digital signatures — I've seen assertions with several of them — so missing signatures is one attack: it's essentially fuzzing your SAML endpoint by removing the digital signatures and seeing how your portal handles those SAML responses. Signature wrapping attacks are where you clone or move the digital signatures and assertions around — for example cloning a signature with your own public-key certificate — to see whether your SAML library, or your hand-written validation code, actually catches those exceptions.
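To make the "exponential expansion" point concrete, here is a sketch of how tiny the classic billion-laughs payload is compared to what a naive, entity-expanding parser would materialize. No parsing happens here — it's just constructing the payload text and doing the arithmetic.

```python
# Nine nested entities, each defined as ten copies of the previous one.
levels = 9
fanout = 10
dtd_lines = ['<!ENTITY lol "lol">']
for i in range(1, levels + 1):
    prev = "lol" if i == 1 else f"lol{i - 1}"
    dtd_lines.append(f'<!ENTITY lol{i} "{("&" + prev + ";") * fanout}">')
payload = (
    "<!DOCTYPE lolz [\n" + "\n".join(dtd_lines) + "\n]>\n<lolz>&lol9;</lolz>"
)

# The payload itself is well under a kilobyte on the wire...
print(len(payload))

# ...but expanding it materializes fanout**levels copies of "lol":
expansions = fanout ** levels
print(expansions)   # a billion strings, roughly 3 GB of text in memory
```

That asymmetry — sub-kilobyte request, multi-gigabyte expansion — is why a single request can exhaust a SAML endpoint's memory.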
One of the recent issues we saw was about how SAML is parsed under the hood. If you're using a SAML library like pysaml2 or python-saml, it uses an XML tree to parse the entities and field values of your XML, and this is where the canonicalization algorithm comes into play: you can introduce a comment, and the way the XML tree parses it can lead to an authentication bypass. On the left you can see an account for fakeadmin@example.com; but if you introduce a comment, and the XML tree parser breaks the value down into separate text and comment nodes, then you can hold valid credentials for one account and still log in as admin@example.com. So you have to make sure the underlying XML tree parser takes comments into account in its canonicalization. Now let's talk about the mitigations we have for these SAML security issues.
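The splitting behavior described here can be reproduced with the standard library alone. The sketch below uses `xml.etree.ElementTree` configured to keep comments as tree nodes (which is effectively how lxml-based SAML libraries see the document); the `NameID` value is a made-up example in the spirit of the Duo Security advisory, not taken from any real system.

```python
import xml.etree.ElementTree as ET

doc = "<NameID>admin@example.com<!---->.evil.com</NameID>"

# Keep comments as nodes, the way lxml does by default:
parser = ET.XMLParser(target=ET.TreeBuilder(insert_comments=True))
root = ET.fromstring(doc, parser=parser)

# Naive code reads only the text BEFORE the comment node...
print(root.text)            # admin@example.com

# ...while the rest of the identifier hangs off the comment as its tail:
comment = list(root)[0]
print(comment.tail)         # .evil.com

# Walking every text chunk recovers the full registered identifier:
print("".join(root.itertext()))   # admin@example.com.evil.com
```

An attacker who legitimately controls `admin@example.com.evil.com` gets an assertion whose naive reading is `admin@example.com` — the bypass the speakers describe.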
I've definitely seen that application developers use a lot of SAML validation libraries — pysaml2 and python-saml, as I already mentioned. So the first mitigation towards a good SAML implementation is using the right validation library: one that catches the exceptions for all the security issues we spoke about and has been patched for them. Disabling external entities is also very important. On a SAML endpoint you don't need external references in most cases, because a SAML message is a self-contained document carrying everything needed to verify the authenticity of a particular identity provider — so set external entity resolution to false, and you're no longer vulnerable to XXE on your SAML endpoint. And to prevent a denial of service like the billion laughs attack, you need memory restrictions: you can configure a security manager on your endpoint that turns off or limits entity expansion, so that when parsing hits a particular memory or computation bound, that function fails instead of taking down your SAML endpoint. Configuring a security manager is very important for preventing billion laughs attacks. Also, look at alternatives: the industry is moving towards OpenID Connect, which gives you more of a JWT after single sign-on — less verbose, and easier to interpret and correlate the token's metadata — rather than a file that's around seven to nine kilobytes. So look at alternatives for your single sign-on as well.
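One way to "disable external entities" and stop entity expansion before it even starts, sketched with Python's stdlib expat bindings. The handler attributes are real pyexpat APIs, but the hardened-parser wrapper itself is my own illustration — in practice you would rely on your SAML library's hardened parser or a package like defusedxml rather than rolling this by hand.

```python
from xml.parsers import expat

def make_hardened_parser() -> expat.XMLParserType:
    """An expat parser that rejects DTDs outright, so neither entity
    expansion (billion laughs) nor external entity fetches (XXE) can begin."""
    parser = expat.ParserCreate()

    def forbid_dtd(*_args):
        # Fires as soon as a <!DOCTYPE ...> appears, before any <!ENTITY ...>
        raise ValueError("DTDs are forbidden on this endpoint")

    parser.StartDoctypeDeclHandler = forbid_dtd
    # Belt and braces: refuse to resolve external entity references too.
    parser.ExternalEntityRefHandler = lambda *_a: 0
    return parser

# A plain SAML-shaped document parses fine:
make_hardened_parser().Parse("<Response><Assertion/></Response>", True)

# Anything carrying a DTD is rejected before entities are even declared:
try:
    make_hardened_parser().Parse('<!DOCTYPE x [<!ENTITY a "b">]><x>&a;</x>', True)
except ValueError as err:
    print(err)
```

Rejecting the DTD wholesale is the simplest policy for SAML, since legitimate assertions never need one; it plays the same role as the entity-expansion limits a Java "security manager" configures.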
So now we've covered different authentication patterns, and we just wanted to briefly go over the commonalities we've seen across them — be it JWT, SAML, or passwordless. First, token metadata: it can be used to provide very granular access control and authorization for your portal. The choice of algorithm is also very important: based on the use case, if secret sharing is easy, you can look into symmetric encryption; if not, look at public-key encryption. Then there's insufficient validation: every authentication pattern comes with multiple layers of defense and multiple validations, so combine the validations rather than running just one — or running one incorrectly — because that's what leads to authentication and authorization bypass issues. And missing context matters too, as we saw with both JWTs and passwordless: you need to understand the context of your application to decide which authentication approach is best and which kinds of algorithms to use. Just to conclude: there are many ways to authenticate. We haven't touched on all the different authentication patterns, but we did pick a few and talked about the bad practices and the mitigations. And as application security engineers, we work very closely with developers to educate and empower them, so that they can ship the most secure authentication possible. So that's our talk, and we're now open for Q&A.

The question is about the digital signature bypass — the authentication bypass — specifically for .NET. We've come across the very common case reported by Duo Security, where in most SAML libraries, whether behind Okta or OneLogin, you can include a comment and bypass the signature check, but I don't have specifics just for .NET — I can definitely look into that. Cool, okay, I'll do that, thank you.

The next question was about how you prevent token stealing. A token can always be stolen; sending it over HTTPS — over TLS — is a good baseline prevention, but even if tokens are stolen, that's where the other layers of defense come in. As Divya spoke about earlier, avoid leaking tokens through the Referer or other headers, so compensating controls come into play — we cannot absolutely prevent the theft of any kind of token, JWT or otherwise. One thing you can do is not put too much sensitive information inside the JSON key-value pairs, because anyone who gets the token can view all of those details. So it's mostly compensating controls around any kind of token, and that applies to JWTs as well. Since the payload is only Base64-encoded, anybody can decode it, so we don't want to put anything very sensitive in there. One thing I've seen is using the jti claim: if there are a lot of requests carrying a JWT with a particular jti but the signature keeps coming back invalid, we have an IP tracking system to detect those cases — that jti is being fuzzed against the digital signature — and a pipeline that feeds them into a central system, so we can say these are the bad jtis out there, don't trust them, and perhaps work towards a global lockout. So to answer your question: we could definitely put a source IP in, like AWS does for its IAM roles. It's just that we don't want to put a lot of PII or sensitive data into a JSON Web Token, because we don't encrypt it — we just send it over TLS — so any service can Base64-decode it and look at it. But yes, we can definitely look into adding a source IP and doing some IP whitelisting as well.

If I understood correctly, you want recommendations on token binding for mobile apps. Good that you brought that up — I wanted to include it in the presentation anyway. The way it works on mobile is that there are two agents involved, so if you follow the best practices and keep the code verifier and the token separated, as they're meant to be, the impact of a compromise lessens. How do you store secrets right now — if you're using plain OAuth on mobile devices, where do you store them? Yes, that's correct: you can always reverse-engineer the app and extract any secrets baked into it. But the security PKCE brings is that two agents communicate to complete the flow, so even if one is compromised, the entire system isn't. So I'd say storing the two in separate apps or separate agents still gives you that security, but eventually everything on the device is reversible — unless you want to invest in another server that the mobile talks to, so the secret is removed from the device entirely, which would be overkill. The dynamic-registration use case with OAuth is always a little iffy, so pre-registering removes that attack vector; if it has to be dynamic, you can definitely apply compensating controls on the server end. If you have other questions, you can definitely reach out to us, but we are out of time — so thank you for being a great audience!