And welcome, here we are at another OAuth Happy Hour. Back again this week, I'm Aaron Parecki and this is Micah Silverman here. So we've got more updates to share and of course more time for Q&A, so feel free to drop questions in the chat if you have any. But otherwise, we will go ahead and just dive right in. Yeah, so I know you had an OAuth 2.1 meeting this last week. Yeah, exactly. So we're in the middle of our series of interim meetings, which is a new thing the OAuth group is trying. Instead of trying to cram all of our meetings into one week during the IETF week, since we've got a lot of specs in progress, we are spreading them out over the course of about six or seven weeks. So this week was OAuth 2.1 and it went pretty well. The general idea of OAuth 2.1 is that it's consolidating all of the various OAuth specs that have kind of grown out of control over the last 10 years. The main thing that we covered in the meeting was updates since the last meeting, which was I guess six months ago now. Time flies, 'cause it doesn't feel like it was that long ago. We always try to catch people up on what's new, but really the main thing that had changed was we got two very, very good and thorough reviews, by Justin Richer and Vittorio, of that first draft that we had originally published. They had a lot of good feedback: editorial changes, finding inconsistencies, finding things that were weird because we were copying and pasting from several different specs and kind of throwing them together. They had a lot of good thoughts and suggestions. I made it through about half of each of their feedback, the first six or seven sections of the spec, and actually a lot of their feedback I just went and incorporated immediately. I just updated the spec to match, because it was nothing controversial. It was mainly like, this would sound better this way.
And I was like, yeah, it would. So I just went and did that. Anything that I felt would take more than like 10 seconds for me to go and actually write the change, or where I thought there might be differing opinions on whether it should change, all of that is opened up as issues on GitHub now. So there's a whole bunch of issues based on their feedback in GitHub, and that's all stuff that needs more discussion. Oops, gosh. Oh yeah, I don't have a screen share. That's what's going on. Yep. That's what I wanted to do. Yeah. So I added tags for them. This draft-00 feedback tag is anything that's opened up specifically from their feedback. And some of these are just like, yeah, I agree, it's just going to be a lot of work. Here's a good example of one: move this section in with the other grants, because that's where it makes sense logically. It's a lot of work to just move text around. It turns out it's not easy. But then some of the stuff is clarifications, like, how exactly do we want to handle this? Here's one that's interesting: warn against structured client IDs, which doesn't sound like a big thing, but it is tricky to actually find a good way to make that make sense. So there's a lot of good stuff in here. Client IDs with a pattern or something? What's a structured client ID? Yeah, so this is from Vittorio, the idea of a structured client ID. I think most places don't use structured client IDs. It's just a random string, a series of random letters and numbers. But any sort of structure can start to leak information or be guessable, which can have bad side effects. Sequential numbers, for example. We generally know that using sequential numbers as identifiers for things, especially if those things are not all public by default, is not a good plan, right?
You don't put sequential numbers in your URLs to identify records, because then that lets people enumerate your data set. And it's not hacking, but it has been called hacking in several security breaches that were discovered because people put sequential numbers as the IDs for records that were not supposed to be public. So that's one example of a structured ID. But Vittorio's example would be something like developer name plus a region plus a serial number. Anytime you've got a template where it's like, here's the region or your specific data center or whatever, that can again start to leak information and cause problems that may not be immediately apparent. So yeah, that's an example of something that needs more thought. That's not something you can just throw in there in a couple of seconds of writing. So there's a lot of that kind of stuff, but it's all good feedback. We just have to work through these. All of these are open for discussion as well. Feel free to chime in on any of these threads, totally fine to do that. Yeah. You know, to your point that you made earlier, it's largely about aggregating the best current practice under one umbrella. But one thing I was curious about, in terms of how it's evolved and what kind of conversations you've been having, is the introduction of some new terms, like the credentialed client type, for instance. How has that been received? Is it among the things that you've had to work on, or is it just pretty obvious, or how's that going? That one in particular hasn't really been controversial. I think the only sort of pushback on that is that people are generally wary of trying to make things more complicated, right? Because the whole goal of this is to simplify. So with new terms, it's easy to start making things complex.
But the goal of that term specifically is to clean up a lot of other language in the spec that is otherwise confusing when the concept doesn't have a name. And that's the whole argument for that term: it's not a new concept being added. It's actually already in the spec, just kind of being danced around with language. It's never been formally addressed before, I guess. Yeah, the spec will have phrases like, if the client is confidential or has credentials. And elsewhere in the spec, it's like, confidential clients have credentials. And then you're like, public clients don't. So what does that sentence mean, if it's either a confidential client or has credentials? That's the kind of confusion we're trying to fix. The idea of calling it a credentialed client is supposed to be like, yes, this is the thing we know to expect, and here are examples of when this might happen. So it might be worth reiterating, for anybody that may have just joined: up until now, there were two formally defined client types, right? Public and confidential. So maybe speak to those two, and to what credentialed actually means. Sure, yeah. So a confidential client, the idea is, that's a client that will have a client secret, but it's also a client that can keep a secret. Typically it's most often going to be apps running on a web server, where the application code and any configuration that you put into the application lives on a server that the developer is operating, right? It's not visible to the public. You can put API keys in there, you can put secrets in there. You do that very often. You know, all sorts of API keys go into your config files, right? And they stay secret. And the opposite is public clients, which is anytime you can't keep things secret. That's primarily going to be single-page apps and mobile apps or desktop apps.
Because if you tried to put an API key into your JavaScript app, you'd discover very quickly that anybody can see it. And then it's not very secret anymore. So you can't use a client secret for a JavaScript app, because it wouldn't be secret anymore. So those are the two extremes, and those are the two types of clients that OAuth has typically been defined around. And that's great and all, but there is something in between. And the thing in between is when the app has a client secret, but it's a client secret only for that particular instance, and it wasn't deployed with that secret. So the easiest example of this to imagine is a mobile app where you don't put a secret into the mobile app at compile time. You compile the app with no client secret, ship it to the app store, somebody downloads it. And then the first time the app launches, it says, I need a secret, and it goes and makes an API call to the OAuth server to get one. It might identify itself with a public client ID, but there can't be any other authentication in that call, because there's no way to ship any authentication into the app. So the first time it boots up, it'll go get a client secret from the OAuth server and then it can store it. Now, the thing about that is that that client secret identifies, and can authenticate, that particular instance of the app from then on. So anytime that app then makes a request to the token endpoint to get access tokens, the OAuth server knows that it is actually that same app coming back again and again, and not some other instance of the app. So you can do things with that, like constraining refresh token use so that refresh tokens can't be stolen out of that app anymore, because you would also need to steal that client secret.
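The first-launch bootstrap Aaron describes could look roughly like this in Python. Everything here is illustrative: the endpoint URL, payload fields, and `keychain` dict are stand-ins (a real app would discover the registration endpoint from server metadata and use the platform keychain), and `register` is injectable so the flow can be exercised without a network.

```python
import json
import urllib.request

# Illustrative endpoint; not a real server.
REGISTRATION_ENDPOINT = "https://auth.example.com/oauth2/register"

def post_registration(payload):
    # Unauthenticated POST to the dynamic registration endpoint
    # (RFC 7591 style). No secret ships in the app binary, so nothing
    # can authenticate this very first call.
    req = urllib.request.Request(
        REGISTRATION_ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def first_launch_bootstrap(keychain, register=post_registration):
    # On first launch only: register this install and persist the
    # per-instance credentials the server mints in its response.
    if "client_secret" in keychain:
        return keychain  # already bootstrapped on an earlier launch
    response = register({
        "client_name": "Example Mobile App",
        "redirect_uris": ["com.example.app:/callback"],
        "token_endpoint_auth_method": "client_secret_basic",
    })
    keychain["client_id"] = response["client_id"]
    keychain["client_secret"] = response["client_secret"]
    return keychain
```

From then on, token endpoint requests from this install can authenticate with the stored secret, which is what lets the server recognize that same instance coming back.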
You could set different policies around it, or different token lifetime policies, based on the fact that you know which particular instance it is, and if you then know which user is logging in, you can start making decisions about that. So the fact that you can prove that a particular instance of the app is the same one across multiple requests, that is useful. However, what it does not do is prove that it is really that app, whatever that might mean. It doesn't prove the app's identity, because the way that it got the secret was unauthenticated. Right. So we care about app identity when we are deciding things like token lifetimes. You might say, my website, which is a confidential client, will get access tokens that last for 24 hours and refresh tokens that are unlimited. And then when people come to the website and log in, they can just stay logged in forever. But for a single-page app, you don't necessarily have the same confidence, and you're always worried about cross-site scripting attacks, so you identify that app as a public client and you say, we don't trust this client as much, so we're going to give it shorter tokens and maybe no refresh tokens, or we're going to rotate the refresh tokens every time they're used, or they only last a day, or something like that. So it's useful to be able to know what client is doing the flow. For public clients or credentialed clients, those are clients where you can't actually prove the app's identity; you can just use it as a suggestion that it's probably this client. Whereas with confidential clients, you do know for sure which client is making that request. Right. And the interaction that you described, kind of a bootstrap of, first time launching the app, go get a secret, is that something that's covered in OAuth or any of the extensions, or is that really outside the spec?
It's just an implementation detail that certain clients might engage in. It is described in dynamic client registration, but it's kind of hard to see that. Let me see if I can pull that up. Where is my window... that one. Okay, so dynamic client registration. If you look at the registration request: client registration endpoint, client registration request. So this is the POST request with the JSON document with all the client information. It's going to look something like this, right? Here's a POST, here's the client information. It's typically going to include the client's name, the redirect URLs, maybe how it expects to authenticate with the token endpoint, logo URL, whatever, right? There's a list of all these parameters. This is just an example. But here is the sentence that's kind of a backwards way of saying it: alternatively, if the server supports authorized registration, the developer of the client will be provisioned with an initial access token, and the method by which the initial access token is obtained is out of scope. So it's saying that by default, there is no authentication of this request, but you could add authentication to it if you wanted to, and then you would include an access token of some sort in the registration request. Yeah, and why did my headphones just disconnect? Sorry, computers. Seriously. There we go, okay, now I can hear you. All right. So the response, let's see, the response includes an optional client secret. And if it is issued, then it must be unique for each client ID, and should be unique for multiple instances of a client using the same client ID. So that's like your mobile app example, where you have multiple clients that may all be using the same client ID, but they should each get back a unique client secret in the case of dynamic registration.
So you sort of know that it's that app, but it's not guaranteed, because there was no authentication to start with. Yep. So I actually had a conversation with Justin about this the other day, because someone was wanting to, well, let's take that example of the mobile app registering. The client ID is public information, right? And that's the thing that you're going to have in your OAuth server as, here's who's allowed to use this app, or here are the policies for this app. It's always identified by the client ID. So you want to put that into the compiled app that you shipped to the app store, and then it needs to go and register for a unique client secret for that particular instance of the app, but the OAuth server needs to know which client ID it is in that request. So the app would send its client ID in the registration request, saying, this is who I am, please give me a client secret. However, the intent, according to Justin, one of the editors of this spec, was not to allow the client ID in the registration request, where the list of parameters is defined. This is where spec legalese comes into play, and I had fun dancing around this with him. The intent was that the registration request looks something like this, where it's got just the name of the client, but no client ID. He did not expect people would put a client ID into the registration request. And if you look at it, it says, the request includes client metadata, and the authorization server may provision default values. Okay, great. So where does... now I can't remember where he pointed me to. Yeah, it's not in here. So there's technically nothing preventing a client ID from being used in the request, even though that was his intent. The keyword here is, sends client metadata, right? It's sending the client metadata values as top-level members of the JSON object.
And if you look at the list of client metadata values, they're listed here, in the response section, client ID being one of the things in the response. So he didn't expect the client ID to be used in the request, but there's nothing actually preventing it, and it turns out people want to use it that way. So I was playing spec lawyer, trying to be like, technically, if you read these paragraphs, you can argue that it is okay to put the client ID in the request, even if that wasn't the intent. The way that he expected people to do it was software ID, which is... where is that? Here it is: a unique identifier for the software. And here's where it says, unlike client ID, which is issued by the authorization server, the software ID should remain the same for all instances of the same software. But that's a whole other concept and organizational pattern. So if you wanted to play by the rules, you would put the identifying value in that parameter in the request, and then the server would return back a client ID in the response. However, you could also argue that there's nothing actually preventing you from sending the client ID in the registration request, according to the way that the spec is written, if you squint hard enough. But these are just fun little spec games, and that's not super important. The point is that the client secret gets returned in that response, and it would be unique to the particular instance of the app, and then it can identify that app instance going forward, so the server can tie policies and other decisions to that particular instance. Makes sense. So what else about 2.1, what's coming down the pike here? Yeah, hang on, I'm trying to pull up my slides from the meeting. We had a couple of topics we wanted to explicitly bring up to the group.
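To make the "play by the rules" version concrete: a sketch of what two installs of the same app would send, sharing a `software_id` that the developer controls, while `client_id` appears only in the server's response, never in the request. All the values here are made up for illustration.

```python
def registration_request():
    # RFC 7591 client metadata for one install of the app. Note there
    # is no client_id field: the stable identifier the developer
    # controls is software_id, and the server mints a distinct
    # client_id (and per-instance client_secret) in each response.
    return {
        "client_name": "Example Mobile App",
        "redirect_uris": ["com.example.app:/callback"],
        "software_id": "4f2b8f0a-5c9d-4c41-9b2e-1a2b3c4d5e6f",  # made up
        "software_version": "1.2.0",
    }

# Two different phones running the same binary send identical metadata;
# the server correlates them by software_id, not client_id.
install_a = registration_request()
install_b = registration_request()
```

The server-side policy lookup would then key off `software_id` to find "here are the rules for this app," which is the role the client ID was being pressed into in the spec-lawyering discussion above.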
So a lot of it was just sort of status update stuff, but we also wanted to specifically say, here are a couple of things that I think are worth discussing, that should be clarified in this context and on the record. We actually only got through two of the three things in the list, just because we only had an hour and these things are kind of complicated sometimes. This one is interesting: the iss response parameter. And now I don't remember where we landed on this; I need to go find the meeting minutes to remind myself. I think we've talked about this on this show at some point in the past. Does the iss response parameter sound familiar to you? The iss response parameter... I'm not remembering it, but that doesn't mean much. No? Okay, we may not have talked about it. It's relatively new. It's technically a whole separate spec, a separate extension. So what's going on is, continuing the theme of niche mix-up attacks, the iss parameter is to prevent a specific type of mix-up attack when a client is configured to use multiple authorization servers. Think about a case where a client might have Sign in with Google and Sign in with Facebook, right? Two different authorization servers. So when it gets an authorization response, it needs to know which server to go exchange the authorization code at. It should be keeping track of where it expects to be exchanging that code. There's a... I can't even remember exactly, but there is a... did we freeze? Yeah, you did for me. I don't know if I did for you. I think you're back. Might be a... Yeah, my whole browser locked up. Yeah, you're back. I'm using the M1 Mac Mini, and I have to say, I feel like everybody's claims of its performance are way blown out of proportion. Like, I like it, it's fine. I wouldn't call it the fastest machine ever.
Anyway, so yeah, mix-up attacks are always tricky, and I can't even remember exactly how to pull this one off. You did a great demo of the authorization code injection attack, which is a related type of attack. And that one is difficult too: you have to set so many things up in order to demonstrate it and explain it. Yes, it's easy to demonstrate in a lab environment, but in practice it would be pretty tricky; the timing would be very tight. I wouldn't even say it's easy to demonstrate in a lab environment. You have to have four windows up on your screen and then carefully explain, step by step, what's being sent where in order to show how it happens. So this one's kind of the same thing, except instead of swapping the codes that go into the client, you're changing which authorization server the client sends the code to. They have a name for this: cross social network request forgery. Wow. Again, it's very niche. So anyway, this is the diff that's being added to the security best current practice. There's a new section here. It's actually a very simple mechanism that's added; it's just difficult to explain how the attack happens. But there are two ways to do it. One way is to have the issuer identifier of the server come back in the authorization response. What that means is, normally the redirect URL will have code and state, right? When the authorization server redirects back to the app, it'll include the authorization code and the state parameter, and then the client is supposed to check the state parameter. But as with any injection and mix-up attack, you could also intercept that request before it hits the client and change out the authorization code, which is how your authorization code injection attack works.
So in this case, you could trick the client into sending the authorization code to a different authorization server than the one it was intended for. The mechanism here is, in addition to code and state coming back, we get a new parameter, iss. The server says, this is the issuer that sent you this response. And the client has to check that, and make sure it doesn't differ from the server it was expecting to send the code to. The other way to handle it is distinct redirect URLs. If you have two authorization servers, you just have a different redirect URL for each one, and then you can't get tricked into sending the code to the wrong one. But this is the reason for the iss parameter: if clients register once for use with many issuers, then you can't have different redirect URLs for each issuer. Anyway, the actual mechanism is very simple: include the iss parameter in the response. So it's not a huge amount of code. It's actually even less work than adding PKCE, right? It's just one new parameter. I've got to learn more about this mix-up attack, because it sounds like it would be fun to try and create a practical example like I did for the injection attack. Yeah, you totally should. Those are always really hard to demonstrate; they're just confusing. But the idea is, it's a relatively simple mechanism that is being proposed as a solution. That document itself, the iss extension, has already been brought to the group, and it's already been agreed that it's a good idea to have it as an extension for the cases when a client talks to multiple authorization servers. The security BCP is going to be adding it. So then the question is, should OAuth 2.1 add it? And this is one of those things that's like, it is new. It's brand new.
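On the client side, the check really is just a couple of comparisons against state the client saved when it started the flow. A minimal sketch (the function and parameter names are illustrative, not from any particular library):

```python
def validate_authorization_response(params, pending):
    # `params` is the parsed query string of the redirect back to the
    # client; `pending` is what the client stored when it began the
    # flow: the state value and the issuer it intended to talk to.
    if params.get("state") != pending["state"]:
        raise ValueError("state mismatch: possible CSRF")
    # The mix-up defence: only redeem the code at the issuer that
    # actually produced this response, and refuse anything else.
    if params.get("iss") != pending["issuer"]:
        raise ValueError("iss mismatch: not sending this code anywhere")
    return params["code"]
```

Only after both checks pass would the client exchange the returned code at that issuer's token endpoint, which is what closes off the "send the code to the wrong server" trick.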
It's newer than OAuth 2.1, even, because it was added after we started talking about OAuth 2.1. And one of the goals of 2.1 was to not add new things, but the actual mechanism is also relatively small in scope. So that was the item for discussion, and I honestly can't remember where that discussion landed, so I want to find the meeting minutes. Minutes... yes, here are the minutes from the meeting. I should add this to my site, because this is always hard to find on the IETF site. Summary... was that issue 65? No, that was a different one. ISS, here. Oh, right. So Mike was the one pushing back against the idea of adding it, because it's new, but he was also like, I wouldn't lose sleep over including it. Right. Oh, and because the iss parameter is in ID tokens already, if you are getting an ID token because you're doing OpenID Connect, and you're checking the issuer there, you don't need it in the authorization response, because it's already in the ID token. So yes, it ended with: we're going to include it, but it'll be a SHOULD, and we'll clarify that it's only needed in certain conditions. That's basically how it ended up. Interesting. That makes sense. We're deep in the weeds of this. I know. Yeah, security stuff is hard. It's definitely a challenge. But looking over the meeting minutes, I realized there was one item that I forgot, which is issue 65, which was something that we noticed. I think it was actually not part of their feedback, but something we noticed after re-reading the spec. The spec says this right now: OAuth 2.0 says authorization codes must be short-lived and single-use. And I know for a fact that there are OAuth servers that do not follow that.
There are definitely OAuth servers out there that allow an authorization code to be used multiple times within a small window, because actually implementing single use, meaning destroy-on-first-use, is challenging at scale, when you don't always have transaction locks available to handle it accurately. In large systems, you end up with databases that don't have a transaction lock available to actually remove the authorization code immediately when it's used, and prevent a request that comes in half a second later from thinking it's still in the database. That's why, in practice, there are a lot of servers that just don't enforce the single-use limit. I do think everybody properly does short-lived, because short-lived is relatively easy: you put some sort of expiration on the code, and 30 seconds or a minute or whatever is way more than enough. But the single-use part is not actually often done in practice. So my thought was, if we know people are not doing this in practice, should we update the spec to match what's happening in the real world? Because it seems a little bit silly to have strict requirements in the spec that we know people are not following anyway, right? Which is a tricky position to be in. So that was an interesting discussion. And actually, even the OpenID Connect test suite doesn't require actual one-time use. Right, I think Mike said it'll let it pass if the code is reused within 30 seconds, but it fails after that. So it's just time-based. And the resolution of that discussion was, I would say, inconclusive. Everybody agreed that the current wording is not good, because there's a difference between single use forever and ever, and destroy-on-first-use. The example people were talking about was SMS verification codes. Same idea, right? They're six digits, so there are only, what, a million of them.
And we know for a fact that those codes get reused, because there have been more than a million of those verification messages sent out ever, right? So we're not saying authorization codes have to be globally unique, because that wouldn't be practical to actually require. If you think about it for the SMS code, that would mean nobody could ever use that verification code again, right? That's not what we're saying. We're not saying the number one-two-three-four-five-six can only be used for one person's verification, one time, ever. We're saying destroy on first use. You create an SMS verification code. You let it work for 30 seconds. You make sure that nobody else gets issued that same code within that 30-second window. But after the 30 seconds are up, it's okay if that code gets reused, right? That's what we actually mean for authorization codes also, and that text is technically not in the spec. So everybody agreed that the text should be cleaned up. We're not trying to say that codes should be globally unique forever, which is impractical to implement. But I don't think we really resolved the point about whether to explicitly allow reuse within the time window or not. Currently the spec does not allow reuse, even though people do allow reuse in practice. Yeah. We're going to have to come back to that. Yeah, I'm surprised more people haven't brought it up, or maybe they have. But I know in workshops that we've done, and in trainings that I've done, it's just a bullet point: this is a one-time-use code. What does that actually mean? Is it actually enforced? Right. And it turns out, no, it's actually not enforced. But the spec is very explicit on one-time use. If you read through what it's trying to say, it means burn on first use, not globally unique forever. But it is very clear on that. It's a MUST.
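What destroy-on-first-use boils down to in code is an atomic check-and-delete. Here's a toy in-memory sketch, where a process-local lock stands in for the transactional guarantee that's hard to get in a large distributed store; the point is that the lookup and the deletion have to happen as one step:

```python
import threading

class AuthCodeStore:
    # In-memory stand-in for burn-on-first-use authorization codes.
    # The lock plays the role that an atomic operation (e.g. a
    # transactional delete-and-return in your datastore) would play in
    # a real server spread across many workers.

    def __init__(self):
        self._codes = {}
        self._lock = threading.Lock()

    def issue(self, code, grant):
        # A real server would also stamp an expiry here for short-lived.
        with self._lock:
            self._codes[code] = grant

    def redeem(self, code):
        # Look up and destroy in one atomic step: a concurrent second
        # redemption of the same code finds nothing, which is exactly
        # the race that's hard to close without this guarantee.
        with self._lock:
            return self._codes.pop(code, None)
```

Within a single process this is trivial; the whole discussion above is about how hard it is to get the same atomicity once the codes live in a datastore shared by many servers.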
But that's not what people have built in practice, and I'm not comfortable with the idea that specs are like a wish list. They're supposed to reflect the real world. Right. It may be a little outside the scope of the spec, but have you heard of, or did anybody bring up, unique approaches or companies that did solve the problem at scale, where they really did want to enforce one-time use in a way that didn't result in perpetually expanding database tables of previously used codes, or something? No, I didn't see any discussion of that, but this concept isn't unique to OAuth, right? So you can go look at the existing literature on this problem for solutions. Actually, the Twitter ID problem is similar, where every tweet needs to have a unique identifier. And it turns out generating that many unique identifiers that quickly, without risking doubling up, is actually challenging because of the volume of tweets. They wrote up a whole post on how they do it; I think it's called Snowflake. I guess at some point you get to the sub-nanosecond collision problem. Right. And also time synchronization at the nanosecond level. If you try to use independently running system clocks to synchronize, as in, well, if we only generate within a certain range of numbers within a certain second, then it's fine: it turns out a second is really long. So you go down to the millisecond, but that's actually really long too, and clocks don't run in sync. So it ends up being a much more complicated problem. I didn't realize this, but this is an old blog post now; it's 10 years old. And they talk about how Twitter uses MySQL to store the data, and they were working on replacing it with Cassandra, but there's no way to generate sequential IDs in Cassandra, nor should there be, because it's supposed to be, you know, horizontally scalable.
So one of their criteria was that they do want the IDs to be roughly sortable, so they should be roughly increasing based on when they were generated. And they need to generate IDs uncoordinated. So this is what they ended up with: timestamp, worker number, sequence number. There were more details posted somewhere, and now I'm not sure where. But anyway, it was just interesting that generating roughly sequential numbers at a very large scale, in a distributed way, is challenging, and that's a similar problem to one-time-use, or burn-on-first-use, authorization codes, where you may have multiple physical servers or processes handling the requests. If one of those checks whether a code has been used, it's going to then go burn it in the storage database, but unless you have a synchronized lock on your database, another worker might check and still find it there, allowing it to be used multiple times, which is not supposed to be allowed. So you have to do something to prevent that, but it's actually very hard. Yeah, that makes sense. These are only problems at scale, because if you just use MySQL or Redis to store your authorization codes for 30 seconds, those end up implementing your locking for you, and it's fine. At a small scale this is not at all a problem; it's just once you start getting really big. Yeah, that's kind of interesting. I wonder how Redis would perform at scale with its built-in expiration. Super handy for developers to have expiration built into the system, but I wonder at what point you would start to get into trouble. I think it would get you really far. If you were building an OAuth server based on Redis, I think you'd be able to push it very, very far. I think you'd have to be at Twitter scale before it became a problem. Twitter, Google...
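For reference, the timestamp/worker/sequence layout described above packs into a single 64-bit integer. The bit widths here (41-bit timestamp, 10-bit worker, 12-bit sequence) follow Twitter's published Snowflake design, and the epoch constant is the commonly cited one; treat both as illustrative rather than authoritative:

```python
# Snowflake-style ID: roughly time-sortable, and generated with no
# coordination between workers, because each worker owns its worker_id
# and its own per-millisecond sequence counter.
EPOCH_MS = 1_288_834_974_657  # commonly cited Twitter epoch; illustrative

def snowflake_id(timestamp_ms, worker_id, sequence):
    assert 0 <= worker_id < 1024, "worker id must fit in 10 bits"
    assert 0 <= sequence < 4096, "sequence must fit in 12 bits"
    # High bits are the timestamp, so IDs sort roughly by creation time.
    return ((timestamp_ms - EPOCH_MS) << 22) | (worker_id << 12) | sequence
```

Because the timestamp occupies the high bits, IDs from later milliseconds always sort higher, regardless of which worker generated them, which is exactly the "roughly sortable, uncoordinated" property they wanted.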
Even Okta isn't actually at the scale of Google or Twitter, because Okta is based around everybody getting their own little tenant, and within your tenant, you're not that big, right? Right. Some customers have millions of users, but you're not that big in the grand scheme of things. It's when you have the one Google OAuth server that is exposed to the outside, actually served by hundreds of smaller servers behind the wall, with however many billions of Google users, that you end up with that kind of problem. But with Okta, every customer has their own authorization server, effectively, right? It can, in effect, have its own storage; it does not have to have anything to do with anybody else's authorization server storage. I don't actually know how it's implemented internally, except that there are cells, several clusters of machines that are physically separate. Right. At least topologically, storage doesn't need to be shared between customers. Everybody could be running their own MySQL database behind the scenes, and it would work just the same. So it's actually not a huge scale problem in the same way. Yeah, that's a good point. It's a problem that's unique to only a handful of companies on the planet. Twitter, Google. Twitter, Google, Facebook. Yeah. Maybe Microsoft 365 is pretty big, I guess. And it is single tenant. Yeah, it's interesting. So that was the authorization code question. And then the other big discussion item was around the implicit flows, specifically the OpenID Connect implicit flows. So these were the options we laid out for the group of how we want to mention OpenID Connect. The main concern here is that we're taking out the implicit flow. That's not in the spec. Everyone's on board with that. Cool.
The concern is, when people look at the spec and see that there is no implicit flow, are they going to think that means they also can't use OpenID Connect's ID token response type? When there's no access token, that has almost none of the same downsides as the token response type, where you're getting access tokens in the front channel. So the authors are not saying that the ID token response type should be deprecated. That's something that OpenID Connect defined and has done security profiling on, and it's not the business of the OAuth group. What the OAuth group is saying is that response type token is not secure and should not be used. And there are actually several different ways to handle that in the spec. We could keep it the way it's currently written, which is: there is no reference to OpenID Connect, there is no reference to response type token, it's just gone. Meaning, if you're a developer reading through the spec, unless you're familiar with how specs work, you may not realize what that means. And that's the argument people are making: technically, the way specs work, if something's not mentioned, there is no opinion on it anymore. So OpenID Connect can still add response type ID token, and it doesn't matter that implicit is no longer defined; that's a totally separate issue. Somebody could even come along and redefine response type token if they wanted to, as an extension. I wouldn't recommend it, and the OAuth group wouldn't adopt it, but you can still do that. You can still extend the spec by adding new response types. So just because it's not in the spec doesn't mean you can't do it. It means it's not in the spec, we're not saying it's a good idea, and there's no position on it.
So we could instead try to be more clear about what we're trying to do, which is saying response type token is deprecated, but that doesn't deprecate other response types that might have been created elsewhere. Or we could say it's okay to use response type ID token because it's defined by OpenID Connect, which is like a back reference, because OpenID Connect is the extension and this would be the core talking about an extension. So there are a couple different ways to handle it, all of which are kind of unusual for a spec to do. But the main concern, the reason this was brought up, was the potential for confusion among readers who don't understand the intricacies of how spec language works. So after this whole long discussion, we ended up saying... where did it go? This is what I proposed: let's address the case where we don't want to confuse people, but we also don't want to mess with the normative spec language and do weird back references. So if we put it into the little bottom section that says "differences from OAuth 2" and explicitly say there that it does not deprecate response type ID token, then it's not part of the normative spec. We don't have to mess with any of the spec language or deal with how it references anything, but people looking for it will find it, and then it's fine. So I think Vittorio is happy with that. He just wanted it to be in the document somewhere, so that people reading the spec who are looking for it don't get freaked out, like, oh God, I can't use ID tokens anymore. Could you imagine a scenario, and this is pure speculation, where the OIDC working group, or whatever they call themselves, says, well, we like the implicit flow, we're just going to move response type equals token into OpenID Connect? It would be wholly, just like ID token, an invention of OpenID Connect.
Now we're going to say response type equals token is part of the OIDC spec. Is there a world where something like that could happen? Absolutely. I would say it's unlikely to happen with just response type token. It's more likely to happen with the hybrid ones, like response type token ID token, because you do need, at the very least, protection against access token injection, which the ID token can provide. Because you can put the hash of the access token in the ID token, and the ID token is signed, you can then make sure that you're not allowing access tokens to be injected. You need that. So they wouldn't redefine the plain response type token, but I could see them explicitly going back and saying, we do want response type token ID token to be allowed, even though it's a bad idea. And the reason it's a bad idea is that it only solves one side of the attack. It only solves the injection attack; it doesn't solve the leakage side. And you'll hear some arguments about how the form post response mode solves the leakage side, since the token isn't going in the URL, it's not in the address bar anymore, right? So it's less likely to be logged in some cases, but it's still going through the browser rather than the back channel. So in my opinion, it does not actually solve the leakage attack; it just solves some parts of it. So I still don't think any response type with token in it is a good idea at all. It's interesting the way the two really separate organizations have evolved, because, playing devil's advocate, I could also imagine being on the OIDC working group and thinking: if I'm a firm believer that, for whatever reasons, the implicit flow is just fine, it's going to be hard for us to come up with a new version of OpenID Connect that's also OAuth 2.1 compliant. Kind of not your problem, but it is interesting to see the way these two groups have evolved.
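The access-token hash mentioned here is the at_hash claim in OpenID Connect: the left-most half of a hash of the access token, base64url-encoded without padding, where the hash algorithm matches the ID token's signing algorithm (SHA-256 for an RS256-signed ID token). A minimal sketch of how a client recomputes it to detect an injected token (the token strings are made-up values for illustration):

```python
import base64
import hashlib

def compute_at_hash(access_token: str) -> str:
    """at_hash for an RS256-signed ID token, per OpenID Connect Core:
    SHA-256 the ASCII access token, keep the left-most 128 bits,
    base64url-encode with padding stripped."""
    digest = hashlib.sha256(access_token.encode("ascii")).digest()
    half = digest[: len(digest) // 2]
    return base64.urlsafe_b64encode(half).rstrip(b"=").decode()

# A client that received both tokens in the front channel compares the
# at_hash claim from the (signed) ID token against a hash it computes
# itself; a swapped-in access token won't match.
issued_token = "example-access-token"          # made-up value
at_hash_from_id_token = compute_at_hash(issued_token)
assert compute_at_hash(issued_token) == at_hash_from_id_token
assert compute_at_hash("attacker-injected-token") != at_hash_from_id_token
```

Because the ID token carrying at_hash is signed by the authorization server, an attacker can swap the access token in the fragment but can't forge a matching hash, which is exactly the injection protection being described. It does nothing for leakage, though: both tokens still transit the browser.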
It is, and it's interesting that there are a lot of overlapping people who are in both groups. So a lot of the OAuth members are in OpenID Connect and work on both. Cool. Well, for the last few minutes, let's see if we can take some of these questions that have been coming in. Yeah, sure. Okay: Okta with a Google IDP, access token giving an invalid signature, any solution? I guess I'm a little bit confused about the question. One, if you're using Okta with a Google IDP, that means the access token is issued by Okta, right? In which case it should be your resource server, your API, that's validating it. And the fact that the signature is invalid has nothing to do with the Google IDP. So the access token exists regardless of what IDP is being used, and you would look for solutions to why you might be getting an invalid signature regardless of the fact that it's a Google IDP. I'll just say, as kind of a side note, this may not be the particular issue, but it's not uncommon for there to be some confusion as to where the token is coming from when you're using Okta with an external IDP. So you might imagine a scenario where somebody's trying to validate a token using a public key issued by Google, because you expect that there's a Google interaction happening. But when Okta's sitting in the middle, like a proxy, with these external IDPs, then, like you said, you end up with an Okta access token, and you would use the Okta key to validate that token. So that's one way you might get an invalid signature: you're not using the right key to validate the token. Yeah, that's probably what's going on. The thing to keep in mind is that every OAuth exchange is going to be between a client and an authorization server. And if that authorization server ends up delegating its logins to some other authorization server, that's a new OAuth exchange happening.
And that AS is actually acting as a client to the other AS. So in the case of Okta with Google as the backing IDP, you have two OAuth interactions going on. You have your client to Okta, where Okta ends up issuing access tokens to the client. Then separately, you have Okta being a client to Google, where Okta might be getting an access token or ID token from Google. They don't cross at all. They're totally separate interactions, and you have to treat them totally separately. Basically, your clients would never be aware of what IDP is in the back end, except for the fact that the user ID may be passed through or something like that. Okay: how big can the ID token get? Is there any spec? The Azure ID token can hold up to 22 attributes. I don't think I've seen any official limits on the size. It's more just general guidance, and that's why some IDPs will have limits like that, like 22 attributes. I have no idea where they came up with 22. But the bigger it is, the more things can go wrong with it. Because it ends up being used in URLs, for example, if you're using the OIDC implicit flow, you end up with the ID token back in the URL, and then you end up with browser URL length limits getting involved. And that's one reason to cap the number of claims that can go into it, so that people don't just try to put every attribute in a user table, or every group the user is in, into it and end up with super long ID tokens. Having the limit be a number of attributes seems a little odd, too, because can I have one attribute with, like, a 4K value? Right. Let me just put another JSON Web Token in this attribute. Yeah. But generally, it's not a good idea to cram a ton of data into the tokens themselves, just because longer tokens can cause problems.
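One quick way to debug the wrong-key confusion from the earlier question is to peek at the token's iss claim before validating: the issuer tells you which authorization server's keys to check the signature against (Okta's, in the scenario above, not Google's). A minimal sketch that decodes the claims without verifying the signature, for debugging only; the token below is hand-built, and the issuer URL is a made-up example:

```python
import base64
import json

def peek_claims(jwt: str) -> dict:
    """Decode a JWT's payload WITHOUT verifying the signature -- useful
    only for seeing which issuer's keys you should validate with."""
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def _segment(obj) -> str:
    """Helper to base64url-encode one JWT segment (illustration only)."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

# Hand-built token: issued by an (assumed) Okta org, even though Google
# is the external IDP behind it.
fake_jwt = ".".join([
    _segment({"alg": "RS256", "typ": "JWT"}),
    _segment({"iss": "https://example.okta.com/oauth2/default", "sub": "user@example.com"}),
    "fake-signature",
])

claims = peek_claims(fake_jwt)
# Validate against the keys of whoever actually issued the token --
# the Okta authorization server the client talked to, not Google.
assert claims["iss"] == "https://example.okta.com/oauth2/default"
```

The point of the sketch is only diagnosis: real validation still means fetching that issuer's published keys and verifying the signature, never trusting unverified claims for authorization.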
And then for access tokens, sometimes there are HTTP header length limits configured at various proxies or web servers along the route, and you can't always even know what those limits are. Is it a bad idea to use different OAuth servers for different projects in a big enterprise? What do you think? That's a fair question. We have definitely seen different customers do different things, and it's one of the reasons Okta supports multiple custom authorization servers for API Access Management. It's the unsatisfying answer of "it depends," but I think there are cases where it makes sense to have multiple authorization servers. I've seen examples where, for whatever reason, they just need different sets of custom scopes, and they don't want one giant long set jammed into one authorization server arbitrarily, so they have different authorization servers with different custom scopes for completely different use cases. Things like that. Yeah. Good. Well, cool. Let's go ahead and wrap up here. I want to just remind people about... oops, wrong one. If you are curious about joining any of the OAuth meetings, feel free to join the list at events.oauth.net. Next Monday we're talking about two relatively new specs. We touched on those briefly last week; they're pretty interesting to me. Should be fun. And then oktadev.events, which is where you'll find these happy hours listed, as well as other workshops that we're doing. I just finished, last night, Australia afternoon time, the second part of my OAuth workshop. That was a lot of fun. And the next talks are happening at Oktane, and it is free because it is online, so please come and enjoy the sessions. We have a whole bunch of developer talks happening at Oktane.
So you'll find the times listed there, and you can localize them to your time zone if you go look at the site. And we'll be back next week, before Oktane, with another round of this event, talking about... well, whatever the next OAuth meeting is that will have happened by next Thursday. All right. Well, thanks for watching, thanks for the questions, and see you same time next week. Bye, everybody.