My wife asked me: do you feel like it's your birthday when you have to work? No, but this is a great evening, isn't it? Oh, hi! I'm from the post office as well, and we have been instructed to deliver you this postcard and wish you a happy birthday. I guess they hacked my phone. Now I told them, because I brought some cake to my fellow Heralds, because it's my birthday. So thank you, Heralds, thank you, post office. Wow, it's so wonderful that these services are all around; a big applause for the post office. And then on to the thing you're really here for, of course, which is the great talk. So I've warmed up the audience, together with the main post office, warmed up the audience for you a bit already. Ladies and gentlemen, we're going to start with our next speaker. I won't tell you that much about him, because he will introduce himself, but let's give him at least as warm a welcome as you gave me. So, ladies and gentlemen: Matthijs Mehlissen.

Good evening all. I'm going to talk tonight about single sign-on protocols as seen from the perspective of a hacker. First, a little bit about myself. I'm Matthijs Mehlissen, and I started my career a long, long time ago as a developer, a PHP developer in a small software company. After that I moved into academia: I started working as a researcher at the University of Birmingham and the University of Luxembourg, where I did research on single sign-on protocols. And currently I'm working as a hacker. I've been working for more than seven years now at Computest, a security testing company in the Netherlands. You might have seen our tent already; it's the tent at the very end of the field with all the arcade games inside. If you haven't, feel free to drop by. Computest is a full-service security company. I myself work as a security tester, but we also do incident response, monitoring, and government services. That's briefly about me.

So, single sign-on. That's what I'm going to talk about tonight. What is single sign-on, actually?
I guess many of you have logged into the Stack Exchange website at some point, so I'm going to demonstrate what single sign-on looks like from an end-user perspective on the Stack Exchange site. If you want to log into Stack Exchange, you click on this beautiful login button. You're presented with the next screen, in which you can choose between a couple of single sign-on providers: in this case Google, Facebook, or Yahoo. You can also choose Stack Exchange's own single sign-on provider. When you click on Google, for example, you're forwarded to Google; you can see in the URL bar that we are on the google.com domain now. You enter your Google credentials, Google provides the user's identity back to Stack Exchange, and we are logged in at Stack Exchange. So that's briefly what single sign-on looks like.

I already mentioned that a single sign-on provider can also be a website that's run by the company itself, and that's a trend we have really seen a lot in the last couple of years: instead of building one gigantic application containing an authentication module, companies are moving authentication out of their application towards a single identity provider, or single sign-on provider. That's good, because it means you don't need to worry about authentication within your application anymore.

So why would people move towards single sign-on? There are a couple of advantages and, of course, also a couple of disadvantages. First of all, single sign-on is quite easy to use for the user: you can log into a lot of different applications using the same credentials, so you don't need to create different credentials for each application, or store different credentials for each application in your password manager. It is also quite easy from the perspective of the person managing the web application.
Because there is only a single place, the identity provider, where you're storing users' identities. So if you're running multiple websites, there's a single place where you can manage them all. And a final advantage is security, with a big question mark behind that, of course. It's somehow nice from a security point of view to have an identity provider handle your authentication, because it means that all the difficulties of authentication and logging in only need to be handled in one place.

But of course, every advantage comes with its own downside, and here it means that you need to build some kind of protocol that connects the application to the identity provider. In particular, you need something on the side of the application that handles the response of the identity provider. And in my daily life as a security tester, I see this going wrong a lot. The problem is, if it goes wrong, there are immediately serious consequences, because it immediately means that an attacker can log in to the application without having the right credentials.

So what would be the simplest possible solution to this problem? I think it's this very simple protocol: as a user, we enter a username and password at the service provider, the application we would like to log into. That username and password is forwarded by the service provider to the identity provider. The identity provider checks if these credentials are right, and if so, tells the service provider. A very simple protocol, and also a perfectly fine protocol, as long as you assume that service providers are trustworthy.
But if you build a setup like this and one of the service providers gets compromised or turns malicious, it owns the credentials of the user, so it can also log in to all the applications that are connected to the same identity provider. In reality, in security, you would often like to have segregation: the guarantee that even if one service provider gets compromised, all other service providers remain secure. And this is the reason that you would like a proper single sign-on protocol rather than the simple naive solution I gave here.

So what are we going to talk about tonight? I'm going to talk about the two protocols that we see the most in the wild: SAML, which is an XML-based protocol, and OAuth and OpenID Connect; I'll tell you about the relation between those two later. For each of those protocols, I'm first going to show how it works, and then I'm going to show the most common attacks. I'm not going to demonstrate any new attacks today. The attacks I'm going to present are all very well known by the people who look into these protocols every day, but apparently not so well known by developers, because we find these attacks quite commonly in the wild.

Like I said, I'll start with SAML. In SAML we have three different roles. We have a service provider: that's the application that the user would like to log into. We have an identity provider: that's the server, or the service, that checks the user's identity. And of course we have a user who tries to log in through his browser. The steps of the protocol are actually quite simple: if the user wants to log in, the service provider sends an authentication request through the user's browser to the identity provider. Note that all messages are passed through the user's browser.
The identity provider then checks the user's identity and sends an authentication response back to the service provider. If the service provider receives that authentication response, it can check that it is indeed the right user that's logged in.

So I'm going to show now what these messages look like. An authentication request, like I said, is XML-based, so a SAML authentication request looks like this. We have an ID in there, we have the name of the provider, the destination, and we also have the issuer, which is the service provider that generates the request. If the identity provider receives this message, it responds with a SAML response. In the SAML response there is an ID; again there is a destination; and there is an in-response-to value, which should be equal to the ID of the request we just saw. Also, there's the name of the issuer, the name of the identity provider; there's a status code, success in this case, which means that logging in succeeded; and there is an assertion.

And the assertion looks like this. It again has an ID and the name of the issuer, and it has a digital signature. This, by the way, is funny, because digital signatures contain a URI that refers to the part that's actually being signed. In this case it says: this URI is being signed. So the entire thing being signed contains its own signature. When I first saw this I thought: that's funny, you cannot put a signature over something that contains that same signature. But actually the protocol takes this into account: it first drops the signature element before actually verifying the signature. Moreover, we have a subject: the name, or the ID, of the user that's trying to log in. We have conditions: for which audience is it valid, and until when is it valid? And we have an authentication statement: when did authentication happen? And this is important.
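As an aside on that "drop the signature before verifying" step: here is a minimal Python sketch of the idea. This is not real XML-DSig (a real implementation needs canonicalization and a proper library such as xmlsec or signxml); the example assertion and element names are made up for illustration.

```python
# Sketch: an enveloped signature covers the element that contains it, so the
# verifier first removes the <Signature> child, then hashes what remains.
import hashlib
import xml.etree.ElementTree as ET

DSIG_NS = "{http://www.w3.org/2000/09/xmldsig#}"

def digest_without_signature(assertion_xml: str) -> str:
    """Drop the enveloped <Signature> element, then hash the rest."""
    root = ET.fromstring(assertion_xml)
    for sig in root.findall(f"{DSIG_NS}Signature"):
        root.remove(sig)
    # A real implementation canonicalizes (C14N) here; we just serialize.
    return hashlib.sha256(ET.tostring(root)).hexdigest()

assertion = (
    '<Assertion ID="_abc123">'
    '<Signature xmlns="http://www.w3.org/2000/09/xmldsig#">sig-bytes</Signature>'
    '<Subject>user@example.com</Subject>'
    '</Assertion>'
)
print(digest_without_signature(assertion))
```

The point is only that the digest is computed over the assertion as it looked before the signature was inserted, which resolves the chicken-and-egg problem.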
We have the attribute value, which contains, in this case, the email address of the user that's trying to log in. There can also be other attributes.

These messages, of course, need to be sent to the other side somehow, and there are two different ways to do that: URL bindings and POST bindings. What we do is take the message, the XML, and deflate it (that's a compression algorithm), base64-encode it, URL-encode it, and pass it in the URL. Or we can do exactly the same thing, but in a hidden form field.

So what kind of things can possibly go wrong here? First of all, what we can try to do as hackers is execute a man-in-the-middle attack: get in between the service provider and the identity provider, intercept the messages, and see if we can do something with them. And actually, in the case where plain HTTP is used, there's nothing that prevents us from doing that. SAML doesn't care about transport-level security. SAML doesn't encrypt any of the messages that are sent from one side to the other. It does have signatures, but those signatures don't guarantee confidentiality, only integrity. So if there's no TLS, we can intercept all messages, including the SAML response, and use that SAML response to log in as the user that's trying to log in. So always, always, always run SAML protocols over TLS. Fortunately, most of the implementations we see actually do this nowadays.

What else can we do? I already showed you that there is a signature in these XML messages. Of course, if there is a signature, we want to break it. And this is something that works surprisingly often: at our customers, we surprisingly often see that there is a nice signature in a message, but there is no process on the other side that actually verifies this signature.
My theory of why this happens so often is that developers mainly care about making things work, and nothing stops working if they write an implementation that doesn't check the signature. Another theory I have is that perhaps they commented out the line that checks the signature at some point, then forgot that they ever commented it out and never uncommented it. Anyway, these are two hypotheses about the reasons, but we do see that signature checks are very often just missing.

In addition to checking whether signatures are verified, we can also check whether unsigned messages are accepted. And that's not exactly the same, right? To check if the signature is verified, we just change the message and see if it's still accepted. But we can also drop the entire signature element from the XML and see if the application likes it. We can also try self-signed certificates: these signatures are made with a certificate, and we can check whether a self-signed one is accepted. And we can check whether a message that is encrypted rather than signed is accepted, because for some reason developers also confuse these two quite often.

My third attack is also quite a fun one: XML signature wrapping. It's based on a parser differential between the code that extracts the ID of the user that's trying to log in and the code that verifies the signature. What I mean by that, I'm going to show in this quite complicated XML overview. On the left is the regular message; on the right is the message we use if we're going to attack this. What we're doing is wrapping the entire thing into our own SAML response, with an evil response ID (it doesn't really matter what it is), and we enter the ID of the user that we would like to log in as. And we are changing the signature. Or actually, we are not changing the signature at all.
So what we are doing is this: the signature refers to the green block, and the signature is still valid. What we see in some implementations, and this is also something we actually see in practice, is that if you send a message like this, the part that is verified is the green part, and the green part has a perfectly fine signature. Then there is a next step, in which we check which user is actually trying to log in. To do that, these implementations still take the outermost SAML assertion and read the ID from it. But that is not the assertion whose signature we just checked. So that is XML signature wrapping.

Attack number four: attacking the XML parser. Many hackers who have ever worked with XML will find this familiar. There are a lot of nasty things you can do in XML. For example, you can define your own external XML entities that refer to local files or to internal network locations, and you can use this to read the content of the network location or the local file. Nothing special about it, just the normal XML vulnerabilities that always work; they sometimes work here too.

Then we have login cross-site request forgery. And I hear many of you thinking: okay, login cross-site request forgery is not actually that serious. But I'll show you that in the case of SAML, this can be really nasty. Let me first quickly repeat what login cross-site request forgery normally means: you can do cross-site request forgery against login functionality, which means that as an attacker, we can log ourselves in on the device of a user. The user opens his laptop, cross-site request forgery happens, and the user finds that the attacker is logged in to the application on his device. Or perhaps he might not spot it, and he might accidentally enter data into the application, not knowing that he is entering the data into the attacker's account.
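To make the parser differential concrete, here is a deliberately simplified Python sketch (no real signature verification, invented element names): the vulnerable lookup trusts the first assertion it finds, while a safer lookup only reads the assertion whose ID the signature reference actually covers.

```python
# Sketch of the XML signature wrapping pitfall: the subject must be read
# from the element the signature covers, not from the first element found.
import xml.etree.ElementTree as ET

def subject_naive(response_xml: str) -> str:
    """Vulnerable: trusts whichever assertion appears first."""
    root = ET.fromstring(response_xml)
    return root.find(".//Assertion/Subject").text

def subject_of_signed_assertion(response_xml: str, signed_id: str) -> str:
    """Safer: only read the assertion the signature reference points at."""
    root = ET.fromstring(response_xml)
    for assertion in root.iter("Assertion"):
        if assertion.get("ID") == signed_id:
            return assertion.find("Subject").text
    raise ValueError("signed assertion not found")

# Wrapped message: an attacker-injected assertion placed before the
# legitimately signed one (here the signature covers ID "_signed").
wrapped = (
    "<Response>"
    '<Assertion ID="_evil"><Subject>admin</Subject></Assertion>'
    '<Assertion ID="_signed"><Subject>alice</Subject></Assertion>'
    "</Response>"
)
print(subject_naive(wrapped))                           # attacker-chosen subject
print(subject_of_signed_assertion(wrapped, "_signed"))  # subject actually signed
```

The fix, in other words, is to resolve the signed element by the signature's Reference URI and use only that element for identity extraction.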
So again: we can log in as the attacker on the user's device, not the other way around; that would be much more serious, of course. But the nice thing, from a hacker's perspective, is that we can also connect our Facebook account. Many applications have functionality that lets you add an existing single sign-on account, like Facebook, to your account. By pressing "add Facebook account", the normal login flow runs, the normal SAML flow, but instead of logging you in, it connects your Facebook account to your existing account. And if you can CSRF this, it means we can connect the attacker's Facebook account to a user's account, which means that in the future we can always use the attacker's Facebook account to log in to the account of the user. So login CSRF can be very serious, and it's something that needs to be taken very seriously, especially if you have this connecting functionality.

How does it actually work? Basically the same as login CSRF always works. As the attacker, we connect to the service provider and say we want to log in to our own account. The authentication request gets forwarded to the identity provider, and we log in with our own attacker credentials. Now we have an authentication response that belongs to our account. We then forward this authentication response to the user: that might be, for example, a link on which the user can click. If the user is tricked into clicking that link, the link will be followed in the user's browser, it goes to the service provider, and the user receives a valid session for the attacker.

How do you fix this? It's very simple, because there is a default way of fixing this in SAML, and it's something I just showed you: we have this in-response-to value.
And the in-response-to value, I already mentioned, needs to match the ID of the request. This might look like a familiar pattern. Maybe some of you guessed it already, but yes, this in-response-to value is actually just a plain, normal CSRF token. It functions exactly the same. So the trick is: on the side of the service provider, we need to save this ID in the user's session, and when the in-response-to value comes back, we need to check that it indeed matches the ID that we sent out earlier.

Then the last attack for SAML that I would like to discuss is being a malicious service provider. This is what I started this talk with: the reason we use such complicated protocols is that we want to protect ourselves against service providers that get compromised. So then, of course, the question is: does anyone actually protect against this? Do we have the guarantee that if one service provider gets compromised, all the other ones are still secure? Fortunately, we do have this guarantee if the implementation is correct.

Still, we can try to attack this. As a malicious service provider, we let a user start a session with us and run the normal flow: an authentication request to the identity provider, and an authentication response back to the malicious service provider. Now we have an authentication response that's meant for us, for our service provider. What we can do is try to forward that authentication response, meant for us, to another service provider and see what happens. And I can tell you, most of the time this is not going to work, because, remember, this authentication response contains the name of the service provider, and the receiving service provider is probably going to check which name is in there. But this check might be forgotten.
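That in-response-to bookkeeping can be sketched as follows: a minimal Python sketch, assuming some session store on the service provider side (a plain dict here); function names are my own.

```python
# Sketch: the AuthnRequest ID doubles as a one-time CSRF token, stored in
# the user's session and checked against InResponseTo on the way back.
import secrets

def start_login(session: dict) -> str:
    """Generate the AuthnRequest ID and remember it in the user's session."""
    request_id = "_" + secrets.token_hex(16)
    session["saml_request_id"] = request_id
    return request_id

def check_response(session: dict, in_response_to: str) -> bool:
    """Accept the response only if it answers the ID we actually sent out."""
    expected = session.pop("saml_request_id", None)  # one-time use
    return expected is not None and secrets.compare_digest(expected, in_response_to)

session = {}
rid = start_login(session)
assert check_response(session, rid)      # matching response is accepted
assert not check_response(session, rid)  # replay is rejected: ID was consumed
```

Because the ID never leaves the legitimate user's session, an authentication response obtained by the attacker for his own session cannot be planted into someone else's browser.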
I believe Google forgot this check at some point; there was a serious vulnerability at Google. Anyway, the service provider is probably going to check the name of the service provider in the authentication response and see that the message is not intended for it. But we can always try, because who knows, maybe they forgot about this check.

So that's the summary of the first half of this talk. We can try executing man-in-the-middle attacks, try to bypass the signature check, try XML signature wrapping, attack the XML parser, try login CSRF, and try to be a malicious service provider. These are six of the most common, or most interesting, attacks. Of course there are a few others, but these are the most interesting to present, I think.

Let's continue with OAuth. Do you know what OAuth stands for? People always respond: yes, of course I know this, it stands for "open authentication". Everyone gets it wrong: it stands for authorization. And I mention this because it's actually important. The purpose of OAuth is not to be an authentication protocol. The purpose of OAuth is to allow an application to access a user's resources on behalf of that user. For example, we can use OAuth to allow another application to access our profile page.

Let's have a look at what it looks like. It's not XML-based this time, so all the messages are a lot shorter; they even all fit on one slide, instead of the five slides I needed for SAML. So what's going on? We have four calls in total: a user authorization request and an authorization code grant, which are passed through the user agent; and then two back-end calls that are not passed through the user agent but go straight server-to-server, an access token request and an access token grant. But first I should probably mention what roles we have in OAuth.
The roles basically function roughly the same as in SAML, but of course with different vocabulary. In OAuth we have a client: that's the application that would like to get access to a resource, like the user's profile. And we have an authorization server: that's the server that checks the user's identity.

We start with a user authorization request. In the user authorization request we send a message to the authorization server requesting a code, an authorization code. In that request we also include a redirect URI, written here as client.com, which is the URI that the response will be sent back to. What the authorization server is going to do now, as it should, is check the user's identity, and check whether the user actually authorizes the client to get access to, for example, the profile. In most OAuth flows you see such a check. If the check passes, it sends a message back to the redirect URI we passed, including the code. This authorization code gives the client permission to continue the rest of the flow, which is the server-to-server call from the client to the authorization server containing the access token request. In that call we pass the client ID, the client secret, and the code we just received, and the server responds with a piece of JSON that contains an access token. The access token is something that can be used, for example, to call an API that returns the profile information. So as you can see, we have a two-step process: first we receive an authorization code, and this authorization code can be exchanged for an access token.

Let's continue to the attacks. First attack: be a malicious client. I already mentioned that in the SAML case, SAML protects against this by default.
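The two halves of that flow can be sketched as plain request builders in Python. Endpoint paths and parameter names follow RFC 6749; the client credentials are made up, and the actual HTTP calls and the identity/consent checks are left out.

```python
# Sketch of the OAuth authorization code flow's two legs: the front-channel
# redirect to the authorization server, and the back-channel token exchange.
import urllib.parse

def authorization_request_url(auth_server: str, client_id: str,
                              redirect_uri: str, state: str) -> str:
    """Front channel: the URL the user's browser is sent to."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,
    }
    return f"{auth_server}/authorize?" + urllib.parse.urlencode(params)

def access_token_request_body(client_id: str, client_secret: str,
                              code: str, redirect_uri: str) -> dict:
    """Back channel: form body POSTed server-to-server to the token endpoint."""
    return {
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        "client_id": client_id,
        "client_secret": client_secret,
    }
```

Note that the code travels through the browser but the client secret never does; that asymmetry is exactly why leaking the code (and, later, PKCE) matters so much.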
So of course OAuth is going to protect against this as well, right? Wrong. There is no protection whatsoever in OAuth against malicious clients. OAuth doesn't offer any protection against clients that forward the token to other clients. And that's actually by design, because, like I said at the start, OAuth is not an authentication protocol but an authorization protocol. You can, for example, authorize a client to read your profile, and if you authorize that, it wouldn't be weird for that client to pass that authorization on to other applications; after all, it could also directly share the content of the profile. So, important to remember: in OAuth there is no protection whatsoever against clients that forward tokens.

Fortunately, there is a solution for that, and that solution is OpenID Connect. OpenID Connect is a small extension built on top of OAuth and JWT, JSON Web Tokens, and it looks like this. It's actually a very simple extension to OAuth: an ID token is added to the access token grant. This is a JWT, a signed message generated by the authorization server, which importantly contains the audience, by which I mean the client that the message is intended for. It also contains some other information, including the user's identity. Because we now include the audience, the client can no longer forward the message. Of course, it can try to forward it, but a good client would check that it is not the audience and reject the message. As security specialists, this is of course something we should always still check: does this audience check actually happen?

OpenID Connect is intended to be used for authentication. Something I already briefly showed you: in an OAuth flow there should normally be a screen in which you actually authorize the application to get access to certain data, or to log in. Some applications or some flows skip that step, which is nice for us as security testers, as hackers, because it means that as a malicious client we can make any user log in to us and obtain that user's identity, under some conditions of course. That's why we have this permission view. And something else: if they are smart enough to actually have this check, there is still something we can do: we can try to CSRF that permission button and see if that works, because that's also fun and accomplishes the same.

Then, login cross-site request forgery. This is something we saw in SAML as well; we saw that SAML has protection against it. OpenID Connect and OAuth are the same: they also offer protection against it, and it's also something that's implemented wrong incredibly often. We very often see that people just don't bother to implement this check, which is very nice for us, especially, again, if there is functionality to connect new accounts to existing accounts. How does it work? The attacker goes to the authorization server and logs in with his own credentials. The authorization server responds with the authorization code grant; now we have a code grant for our own account. We pass that on as a CSRF attack: we send the link containing the authorization code grant to the user and hope that the user will click on it. At that point the user is logged in as the attacker, because the flow just continues as normal: an access token request followed by an access token grant. Like I said, SAML has a protection mechanism against this, the in-response-to value. OAuth also has protection against this, and it's done by means of a random state that's added to the user authorization request. What's important is that the client, when it receives an authorization code grant, always verifies the state variable in that request against the state that it stored in the session.

Then the next attack: redirect tokens back to us. In this case we have a link to the authorization server where we can log in, and remember, this one contains a redirect URI. Normally this points back to the client, but we can also try to change this link so that it redirects to the attacker. We pass this to the authorization server and hope that we actually receive a redirect that sends the code we need to log in back to us as the attacker. You can even make it a bit more complicated, and that's what this guy actually did: he succeeded in attacking GitHub a couple of years back, and the way he did it was path traversal. He first tried to enter his own attacker domain into the redirect URI; that didn't work, because there was a requirement that the redirect URI started with github.github.com. So he did path traversal: he went up a couple of directories and then moved into his own home directory, because at github.github.com you can place your own content. What happened was that the tokens were indeed redirected to his own home directory at GitHub. So redirect URIs are very tricky. The best way to implement them is probably to require the exact same string: define one particular string that's the only string being accepted. That would probably be the most secure way to accomplish this.

Then an extension, a relatively new extension to OpenID Connect and OAuth, called PKCE. What PKCE does: it's an extension that prevents leaked authorization codes from being useful. Why do we need that?
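That exact-string recommendation for the redirect URI can be sketched like this (client IDs and registered URIs below are made up): the authorization server compares the requested redirect_uri against an allow-list of exact registered strings, so prefix tricks and path traversal don't help.

```python
# Sketch: redirect URI validation by exact string match against a
# per-client allow-list, rather than prefix or substring checks.
REGISTERED = {
    "client-123": {"https://client.example.com/callback"},
}

def redirect_uri_allowed(client_id: str, redirect_uri: str) -> bool:
    return redirect_uri in REGISTERED.get(client_id, set())

# The exact registered string passes; traversal and look-alike hosts do not.
assert redirect_uri_allowed("client-123", "https://client.example.com/callback")
assert not redirect_uri_allowed("client-123",
                                "https://client.example.com/callback/../../evil")
assert not redirect_uri_allowed("client-123",
                                "https://client.example.com.attacker.net/callback")
```

A prefix check like `startswith("https://client.example.com")` would have accepted both attack strings above, which is essentially the GitHub bug just described.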
Especially on mobile applications we often see some problems there's basically two problems with mobile applications first of all they don't have a backend so they don't have a secure place to store their own client secret and second of all redirecting back to a mobile application is very tricky in many instances it's possible to install your own mobile application as an attacker so then there's a malicious application that's registers the URL to which the user is being redirected and then at that way the attacker can intercept malicious codes so that's the reason for Pixi make a way or design a way that prevents authorization codes from being useful and what does it actually offer more precisely if there is a man in the middle on the authorization requests then we are still secure you can prove that even if an attacker is able to read the authorization request then there's still no attack possible and the claim or the guarantee for the authorization code grant so the response to that is even stronger an attacker that intercepts the code or even an attacker that can modify the codes will still not be able to log in to the user's accounts so what does Pixi look like we start with generating a code verifier that's just a random string and we calculate the SHA-256 hash of that and that will be the code challenge and this code challenge we pass along with the user authorization request and at the moment that we actually submit the access token request we include the code verifier and this code verifier that's the original string is also sent to the authorization server the authorization server can now compare the two and can compare that basically can check that this is indeed the SHA-256 hash so once again the reason this is often done or sometimes done is because this offers extra guarantee against authorization codes that get leaked what can we do as a pen tester if you see this definitely check that this code verifier actually is checked because of course I'm quite 
sure there will be many instances I don't think I've ever spotted an impractice but I'm quite sure there are as many instances where people have forgotten about this check so to summarize again a lot of different vulnerabilities in OpenID Connect what can you do as an attacker you can try to be a malicious client you can try to bypass the permission of the user you can try to execute login CSRF you can try to redirect tokens back to SSD attacker using the redirect URI you can check pixie verification so basically the conclusion is single sign on is very nice because it really does provide a lot of advantages but it is also very tricky to get right doesn't really matter if you use SAML or OpenID Connect there's many small tricky things that if you forget to implement it correctly you have big problems such as the attacker being able to login on the account of arbitrary users so I think a general recommendation this applies to a lot of security stuff in general but I think to single sign on protocols in particular do not build your own implementation what we see at our customers if you see your vulnerability we often have a little chat with the developer hey what guy have you guys done how come you actually implemented this like this and very often in that case they just build their own implementation with two or three lines that just check the necessary stuff but don't actually check any security don't actually implement any of the security requirements so please do not build this yourself if you're going to use single sign-on use an existing implementation both on the side of the identity provider there's many good open source product side of there I believe key cloak is quite good and also on the side of the application that actually receiving the message that's coming from the single sign-on provider also on that side don't think it's just two lines of coins if you want to build this yourself please use something that your library provides or uses some existing 
well-known framework.

And finally, I think this is one of the most important things: many developers don't know that the reason you have single sign-on is that you want to protect against malicious clients. So throughout the entire process, keep in mind that the entire goal, the entire reason you're using protocols like this, is that you want to stay secure even if one of the clients gets compromised.

All right, that's what I wanted to tell you about single sign-on. I'm curious if there are any questions. Thank you.

Any questions? Please walk to the front; I think people know already how it goes. We have the first question, go ahead.

Hey, a question about PKCE and why it's actually helpful, if a client can't assure the confidentiality... It's a little bit difficult to understand, also because of the music from outside. Hmm, I'll do my best. If a client can't assure the confidentiality of an authorization code grant, what makes it able to assure the confidentiality of a code verifier secret? From your description it sounds like these are both just static secrets.

All right, so you're talking about PKCE, right? Can you repeat the question once more? Yeah, so before PKCE we have an authorization code grant that we receive back, and protecting the confidentiality of that grant for the lifetime of our application means storing some static secret. Here we're adding another static secret that we store, and it seems like we would need to protect it in the same way. So how does this actually help? Can you repeat what attack scenario you're trying to protect against? Well, I guess I don't see what attack scenario PKCE is trying to protect against.

All right, so I understand your question. The attack scenario is this: at this point in the protocol we have an authorization code and we need to get it back to the user agent somehow, and with mobile applications we have difficulties getting this back to the correct user
agent. So there is quite a risk that if you send this code back, it ends up at a malicious user agent, for example a wrong application, an application installed by the attacker. What PKCE protects against is this: even if the attacker intercepts the code at this point, it still will not be able to continue with the flow, because it does not know the code verifier. The code challenge is a SHA hash of the code verifier, so the attacker will know the code challenge, but it will not know the code verifier, and it will not be able to go on with the protocol after the third step.

I see, so it's more about concerns about the confidentiality of the link between the user agent and the auth server, and in both cases we're assuming the client to be fully trusted? That's correct. Thank you. Thank you for the question.

Any more questions? Yeah, I see one coming up from the back channel; I think people are watching online. So there was a question: wouldn't matching the client ID or client secret against the ID for which the token was issued prevent both the replay and the login CSRF attack, which is something only the auth server, that means the identity provider, would have to do? I'm afraid I'm going to have to ask you to repeat the question, because these are quite technical questions which I need to hear twice. Okay: wouldn't matching the client ID or client secret against the ID for which the token was issued prevent both the replay and the login CSRF attack, which is something only the auth server, the identity provider, would have to do?

I don't think so. These kinds of things are always really tricky, so if I say yes, you can do this for sure, I'm sure someone is going to implement it and quickly end up with a vulnerability. I don't think it's going to work, because probably there is still a message that you're going to be able to send to the client. Off the top of my head, I think you will still be able to send the same message back to the client, because the attacker is still going to know these tokens. But this is
probably something I would need to write out. If you're really interested, feel free to drop me an email and then I can think about it a little longer; probably it's not going to work the way you described. People here can also visit you afterwards, but your email address is on the first slide. Thank you. Well, thank you. Any final question? No? Those were some tough questions, and I agree that it's difficult to top those. In general, for security protocols it's always really difficult to say: I changed a tiny thing, is it still secure or not? That is really a question I cannot answer straight away. If you have more questions, feel free to ask Matthijs, but first let me give him a very big hand. Thank you very much. All right, thanks.
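The PKCE flow and the server-side check the talk tells pen testers to look for can be sketched in a few lines of Python. This is a minimal illustration, not code from the talk: the function names are mine, and following RFC 7636 the challenge is the base64url-encoded SHA-256 hash of the verifier.

```python
import base64
import hashlib
import secrets

def b64url(data: bytes) -> str:
    # Base64url encoding without padding, as RFC 7636 specifies.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_pkce_pair() -> tuple[str, str]:
    # Client side: a random code verifier and its S256 code challenge.
    # Only the challenge travels with the authorization request; the
    # verifier is revealed later, in the token request.
    verifier = b64url(secrets.token_bytes(32))
    challenge = b64url(hashlib.sha256(verifier.encode()).digest())
    return verifier, challenge

def authorization_server_check(challenge: str, verifier: str) -> bool:
    # Server side, at the token endpoint: recompute the challenge from
    # the submitted verifier and compare. Skipping this comparison is
    # exactly the forgotten check the talk warns about.
    expected = b64url(hashlib.sha256(verifier.encode()).digest())
    return secrets.compare_digest(expected, challenge)

verifier, challenge = make_pkce_pair()
print(authorization_server_check(challenge, verifier))  # True: legitimate client
print(authorization_server_check(challenge, "stolen"))  # False: attacker knows only the challenge
```

An attacker who intercepts the authorization code (and even the code challenge) cannot produce the matching verifier, which is why the intercepted code is no longer useful on its own.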