You are muted. It's so 2022 for you to be muted. Lovely, lovely way to start, hold on. Can you hear me now? Oh yeah, absolutely, I can hear you. But from your expression, it looks like you might not be hearing me. Welcome to 2022, folks. You dropped the— those AirPods. All right, can you hear me now? Okay. Maybe, maybe not. Are we here? We are most definitely here. This was just working. I don't know, my setup didn't change. There we go. Okay. Okay. Ah, perfect, I can hear you now.

Well, that was a lovely way to start. The way this works is we're sitting here in StreamYard, and before the show starts we're in StreamYard talking to each other. Everything is fine. You know, I'm wearing headphones, I have my mic set up, it's the exact same setup that we always use to run the show. And then as soon as the countdown got to 10 seconds, my screen just froze, I lost all the audio, and I couldn't see anything that was going on. And then I heard you make a little rustling sound, and I was like, oh no, does that mean we're live? And I can't see that we're live. So yeah, I think my little AirPods are not doing so great lately. I'm gonna blame them for this. Or maybe, you know, there were a lot of solar flares and an EMP pulse and they got knocked out. Yes, exactly.

Well, lovely way to start the show. Also, it is pouring rain in Portland right now, like pouring, pouring. And I had to take my bicycle to the shop to get fixed. So I biked there in the rain, I took an e-bike up here in the rain, and I was just drenched by the time I got here. So all my clothes are in the dryer and I had to change my shirt. So now I am, yeah. Anyway, okay. You are lucky that you don't have your blanket on your head. Otherwise, once you get wet, it takes a long time to dry out. I can only imagine. Yeah, this just sort of fluffs up and it's dry. Nice. Okay.

Well, welcome. Thanks for joining. Hopefully we haven't lost everybody already, but hi, welcome to another round of the OAuth Happy Hour. All right. Yes, we did not have enough time last week to finish what we were talking about, because it turns out trying to recap that much was just taking a lot longer than we expected. So we are going to cover the second part today: the second OAuth session that happened at the IETF. There were a handful of topics discussed in the second half of that, also really interesting stuff. So that's what we're gonna focus on today. And because we're about to have a whole other series of events around OAuth and OpenID Connect, we don't wanna get too far behind in catching up with what's been going on in the world.

All right. So let's start by — I'm going to pull up the event page so we can take a look at what's going on. Hopefully my screen share works today. We're gonna share this one. Lovely. So, IETF 113. Oh, I'm on the wrong one. Need to be here, where I put all the details. Session two. So this was the agenda of what we talked about in part two, day two of the official meeting, after two side meetings. The side meetings are the official unofficial meetings of the group. They are scheduled in a room and they're planned at least a couple of days in advance, but they're not minuted and they're not part of the official proceedings. But it's a good time to get a bunch of people in the room to have more discussions.
And that second side meeting, which was the day before this session — this topic got added to the official agenda based on a lot of the discussions that happened in that side meeting. So yeah, side meeting number two was on the 23rd, at 6pm. And I unfortunately could not go to that, because I had just realized that I was coughing a little bit more than I was comfortable with, and went and got a COVID test at the Teststraße in Vienna. And it turns out I had COVID. So I was stuck in quarantine for five days. So Wednesday night, right before this meeting, I was locked in my room and could not leave until Sunday. And I missed the side meeting, which I was very disappointed about, because a lot of good discussions happened in it. Yeah. During that meeting, the news broke that you tested positive. Yes. And basically half of the room was still talking, and half of the room was walking out to get tested, because there was a good testing facility outside. And in particular, the evening before we had dinner all together in a place which had no air circulation, and you basically sat on my lap for the entire dinner — like, we were sitting very close. So as soon as they said, okay, Aaron tested positive, I went, oh, I am a [beep]. And so I just ran out.

And the Austrian people are not particularly delicate when they do a test. This lady looked so nice and petite and gentle, and then she pulled out the swab like a weapon and went at my nose like, no, stay still. It was quite the experience. Luckily, it turns out that I didn't get it, by some kind of miracle, but you caused quite a bit of a stir. It was very unusual — I don't think anything like that had ever happened at an IETF. It was kind of fun. Figures — the first in-person meeting back again, and something like that was bound to happen. So I was officially number three of three cases at the conference. That was the official count. I suspect there were probably more cases that just never got reported or surfaced, because only the US people had to test to go back home anyway. A lot of people were going back to Europe, where they could just waltz back home, whatever. So yeah. And soon the test requirement might be dropped, too. I don't know if you heard that yesterday they announced the end of the federal mandate for masks, and they literally announced it mid-flight, and a number of pilots just said, take off your mask if you want to. Yeah. It was pretty crazy, but anyway.

I found a picture of the place we had dinner. This was the restaurant — you kind of go down into this little cave. This was actually from lunch on the first day, but we went back for dinner on Tuesday night and it was packed, just completely packed. And honestly, I don't know how everybody in the OAuth group ended up not getting a positive test after that, because I clearly had it on Tuesday night and was sitting there eating with everybody else and had no clue, no idea until Wednesday. So anyway, yeah. But now you are okay, right? Yeah, I'm fine. It wasn't too bad. I've been vaccinated, all the doses and everything. I was wearing masks the whole time, so I think everything was about as much under control as we can have right now. And I quarantined for five days, until Sunday. I literally couldn't leave my hotel room, so I was having food brought up.
I was doing PCR tests every day by putting a little box outside the door; someone would come pick them up and run them down to get processed, and by Saturday I actually started getting negative tests again. So it wasn't too bad. And I couldn't leave until Thursday, just because of the logistics of getting flights and the timing of getting the negative test result in order to check into the flight and all that stuff. It was a whole thing. We're not gonna get into that right now. But I'm fine now, and yeah. So yeah, the side meeting — anyway, that was all just to say that I was really, really disappointed to miss this lovely discussion. And then for the next official meeting there was official remote participation, because all of the sessions had full remote participation, since there was a really reduced in-person number. So there were a lot of people joining remotely, of course. So I was sitting up in my hotel room and calling into the meeting from there, and I was able to actually join some of these discussions and actually participate. So that was nice.

Yeah, so let's jump in to topic number one, the device code flow. Yeah, that one was a very interesting session, which echoed a similar session that Peter did at the OAuth Security Workshop last year, which was virtual. The main thrust of it was Peter discussing the fact that the device flow — and not just the device flow, but pretty much any flow we have today in which the requesting device and the authenticating device are two distinct entities — is open to a number of attacks which are intrinsic to the model, largely phishing and similar. And he demonstrated a number of those. Then Daniel Fett came on stage and demonstrated one, which I think was a website-based one, another multi-device attack. And then someone else — yeah, Filip. Filip Skokan also demonstrated one attack that can happen. And basically the idea was to inform people that this is the case, and then ask: what should we do about it? The range of interventions goes from warning people that this stuff has intrinsic issues — along the same lines of what we did for many years for the implicit flow, where we knew the implicit flow wasn't great, and so we said, well, it's the only game in town given the state of the technology so far, but be careful about all of these constraints — all the way to potentially telling people this is very bad, you shouldn't do it. We didn't go all the way there, and we did mention that there are a number of new approaches and technologies coming up that might make these flows better, or actually make some of them unnecessary. In particular, the multi-device credentials that FIDO is introducing — what is now known as passkeys — were mentioned as one technique. That uses caBLE, which stands for cloud-assisted Bluetooth Low Energy, and is basically a way of using one authenticating device with another device, just like you would use a YubiKey. The connection occurs locally, because it's Bluetooth, but the moment we've established that both devices are in the same space, you just use the internet connection of both devices for the exchange. So, a number of discussions. I think the follow-up is that Peter will drive discussions and find language to add — not, I think, to the BCP that we talked about last week, because that's done, but maybe to some other spec — and also collaborate with the FIDO people to see whether there is some guidance here.
That is the extent of my recollection. That sounds right, yeah. And I realize we should have probably described the most common use of the device flow, the one you're probably familiar with, which is signing into an Apple TV or an Amazon or Roku device — those kinds of devices where you don't have a browser, and you don't have a keyboard, so you can't really type well on those screens. So instead of logging in on the TV itself and typing stuff in there, you bounce out to a separate device like your phone, do your login there, and it sort of figures out how to get the tokens back to the device you're logging in on eventually. I don't know where Vittorio went. Oh, there he is. Vittorio went to cough because the water went down the wrong way, and I didn't want to give you the scene of me looking distressed. We are on a roll today; we're professionals. Yeah, absolutely — hosting 101.

One thing that was very close to my heart in this entire thing is that just the week before, we — we as in Auth0 Lab — released an experimental SDK for doing authentication inside the... sorry, in VR, in virtual reality. And this thing is based on the device flow. The idea is that if you have a Unity environment and a 3D app, you can just drag and drop this dialog, configure it with a client ID and the domain that you want to use for your authorization server, and this thing basically does the device flow for you. And the experience isn't great, because it requires you to use an external device — if you have your headset on, you need to lift it, do stuff, and then put it back on, which is not good. It is better than the current practice, which is to type a password using a remote or a laser pointer on a floating keyboard, a password that you probably don't even know. But the point is, it is an SDK designed to stimulate those conversations and show in practice the challenges we have for authenticating in immersive stuff today. And one of the things I was hoping to achieve in the context of the IETF was to start the conversation about how to improve the device flow so that, for example, the transmission between the device — in this case the VR headset — and the authenticating device, which would be the phone, could be done over local Bluetooth, so that the user doesn't have to do a lot of typing. In the best case, you just say okay to your biometrics and then you are back in your immersive experience. And I had some discussions with Peter out of band before his session, and I also mentioned it during the session. But it looks like right now the idea is either we go toward the WebAuthn side of things, or we try to do something completely different — as in, maybe the device flow, the device grant, was a good way of dealing with smart TVs and similar, but given all the constraints that are coming up and all the attacks that we've seen are possible with this approach, maybe it's time to look at this with fresh eyes and do something completely different. So again, I would expect Peter to drive some of the work, and I'm eager to participate, because I'm a big fan of VR and I really do want to be able to authenticate soon. And as soon as we have AR glasses, not just me but everyone will want to be able to authenticate. So we'll see where things go. Cool, yeah, I like it.
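[Editor's note: for readers following along, here is a minimal sketch of the device authorization grant (RFC 8628) that the TV and VR scenarios above rely on. The endpoint URLs and client ID are hypothetical placeholders, not from any real provider.]

```python
# Minimal device authorization grant (RFC 8628) sketch.
# AS_BASE and CLIENT_ID are hypothetical placeholders.
import time
import requests

AS_BASE = "https://authorization-server.example.com"
CLIENT_ID = "example-tv-app"

# Step 1: the TV (or VR headset) asks for a device code and user code.
dev = requests.post(f"{AS_BASE}/device_authorization",
                    data={"client_id": CLIENT_ID, "scope": "profile"}).json()

# Step 2: show the user where to go on their *other* device.
print(f"On your phone, visit {dev['verification_uri']} "
      f"and enter the code {dev['user_code']}")

# Step 3: poll the token endpoint until the user finishes logging in.
interval = dev.get("interval", 5)
while True:
    time.sleep(interval)
    token = requests.post(f"{AS_BASE}/token", data={
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "device_code": dev["device_code"],
        "client_id": CLIENT_ID,
    }).json()
    if "access_token" in token:
        break                        # the user approved on their phone
    if token.get("error") == "authorization_pending":
        continue                     # user hasn't finished yet, keep polling
    if token.get("error") == "slow_down":
        interval += 5                # back off, as the spec requires
        continue
    raise RuntimeError(token.get("error", "unknown error"))

print("Access token:", token["access_token"])
```

The user_code hand-off in step 2 is exactly the surface the attacks discussed above exploit: nothing binds the phone doing the approval to the device that requested the code.]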
I think that's definitely an important use case that is not well solved by pretty much anything at the moment, and it does need to get better, for sure. So I will be eagerly following that work. Although I don't have VR glasses myself — I'm probably gonna wait until I can get them in my regular glasses. Well, I have one here, the standalone device, and this one instead is the PC one. Ah, yeah. Not everybody knows this, but in '99 I didn't study identity, I studied computational geometry. And in '99 I was doing VR for industrial applications, with much clunkier stuff which was way pricier. And I just stumbled into identity by accident, and now I'm stuck. Which is why, at least in my free time, I like to dabble with that stuff. Yeah. Heather's also a huge VR fan. Yes, we've had some conversations about the device flow too, and agree. Very nice. Yeah, okay, cool.

So that was a good summary of the presentation, I think. Yeah, it is true that any time you're splitting up the login between two devices, if you're asking the user to do something to connect them, then the user can be tricked into connecting two devices even when one of them isn't actually theirs — because you're asking the user to do something. So in the TV example, this is when the TV says: go to this link, enter this code, and then you'll be logged in on the TV. And if the user is somehow tricked into giving that code to somebody else, then the other account would get logged into the TV — which doesn't sound that bad in the TV case. Cool, I get someone else's cable subscription. But it has implications that are much more significant for other applications of the device flow. Yeah, and also, you can generate these from a different device, and the thing is, it isn't always the user who is entering the code: many systems let people use push notifications or SMSes, and at that point people might just get this prompt and think that it's something like, I don't know, a device of theirs that timed out. And so they might enter the code even though they weren't directly prompted by a device of their own. And now the attacker gets the token. Whenever we rely on user judgment, we open ourselves up to the user being busy or distracted or naive. And unfortunately, those things can happen, and do happen. Yeah, this has been a problem with MFA as well — same idea — but at least with MFA you also have to compromise the password for this to really fall apart. But you see this with password resets too: any time you've sent a code to a user somehow, there are just so many ways to trick them into sending the code somewhere else, so that you can bypass MFA or do a password reset or whatever it is. It's a mess. And on the MFA side, we've seen a huge improvement with WebAuthn and the FIDO world of things, where it's all done in hardware, so there is no possible way for the user to be tricked into doing anything wrong — it's all actually hard-coded, and there just isn't a way for them to be tricked. We need that, but for the device flow, essentially. Yep, exactly. Cool. Well, do you want to move on to number two, step-up authentication? Which is— Who is this Victoria Parson? Who is that? No idea.
Well, this one was fun again. As we mentioned a couple of happy hours ago, Brian Campbell and I wrote a proposal for making the step-up flow interoperable in the context of OAuth. It's funny how many people, when we tweeted and talked about it, replied: wait, that's not standardized yet? Because it's something that is so common that people just expect it to have already happened — but it didn't. In fact, the confusing part is that on one side you can get the experience of step-up authentication with something you establish statically. Say that you have different resources and different scopes, and with those scopes you have associated some requirements for authentication. Then step-up can naturally occur without having to create any standards, in the scenario in which you are just asking for two tokens with different scopes, and those two scopes have different requirements. So everything happens — go ahead. In order for that to work, the client has to know that it needs a certain scope to do a certain operation, right? Yes, yes. So that's an important part of it. It does work; it's definitely possible to do. You don't need a standard for it, because you can just create all these policies in your OAuth server: if you're writing your own OAuth server you can hard-code them by hand, and if you're using something like Okta or Auth0 it's all policy configuration stuff, you can do it. But the client has to know: when I make this API call, it needs this scope — and then it knows that a certain token may not have that scope yet, so it has to go get a new one. I actually talk about this a lot in the workshops that I do, as a use case, but it is something that has to be planned, very much planned out ahead of time — there's a sketch of this below. Yeah, it has to be planned, and the client needs to take a lot of active participation in this: the client needs to know when to ask for what, and needs to know when some of those things might result in interactivity or not. But the thing is, this actually doesn't help for a number of scenarios where the determination of whether you should authenticate in a different way is not made statically. Today we have all these discussions about continuous authentication and risk management engines that make a determination at every call — zero trust, continuous auth — and say: okay, with all the things that I know about the context of this call, am I okay with the authentication level? And that determination doesn't depend on scopes at all. And the other thing is that, even without going all the way there, we have always had authorization requirements that are tied to the body of a particular request rather than to the nature of a resource. For example, if you are trying to buy something and the cost of the thing you're trying to buy is beyond a certain threshold, you might trigger some logic that says: okay, now I need you to authenticate more strongly. So you have two problems. One is that today there is no standardized mechanism for the resource that is rejecting the token to tell the client why it is rejecting it, and to tell the client how to remediate. That's the first problem.
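[Editor's note: before moving on, a minimal sketch of the static, scope-driven step-up described a moment ago. The operation-to-scope mapping, scope names, and helpers are all hypothetical; the point is only that the client must know up front which scope each call needs.]

```python
# Static step-up: the client knows up front which scope each operation needs.
# Scope names, the policy behind them, and both helpers are hypothetical.

REQUIRED_SCOPE = {
    "read_balance": "account:read",      # plain login is enough
    "wire_transfer": "payments:write",   # AS policy ties this scope to MFA
}

def reauthorize(scope):
    """Stub: run a new authorization request asking for `scope`.
    In a real client this may be interactive — the user sees an MFA
    prompt because the authorization server's policy hangs off the scope."""
    print(f"(re-authorizing for scope {scope!r}...)")
    return {"access_token": "new-token", "scopes": {scope}}

def call_api(operation, token):
    needed = REQUIRED_SCOPE[operation]
    if needed not in token["scopes"]:
        token = reauthorize(needed)   # client-driven, planned ahead of time
    print(f"calling {operation} with {token['access_token']!r}")
    return token

# A token obtained with only the basic scope gets upgraded when the
# client hits an operation that it knows needs more.
tok = {"access_token": "basic-token", "scopes": {"account:read"}}
tok = call_api("read_balance", tok)
tok = call_api("wire_transfer", tok)
```

Note that nothing here comes from the server at runtime — which is exactly the limitation the dynamic scenarios above run into.]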
Potentially this problem could have been solved by doing a lot of scope math — creating specialized scopes and then sending back, from the API to the client, the scopes that you need. But that is very brittle, it takes a lot of work, and in fact basically no one is doing it. The mechanism for doing this was baked into the core specs from the beginning, but no one has ever done it. While conversely, many of the actions that we expect the client to take — like asking for a particular ACR or AMR, or asking for a fresher token because you got a certain max_age requirement — are already out there, supported by authorization servers, because they are defined in OpenID. And so basically the suggestion that Brian and I are making is to extend the error set that OAuth defines with one that basically says "insufficient user authentication," and to add to the challenge that comes back with that error the details that tell the client what to ask the authorization server for in order to meet the requirements. So we have an acr_values and a max_age that can be embedded in this challenge, and the client can turn around and use those to get a new token, and then once it gets it, just replay the call and finally meet the requirement. It's very, very simple, but it's missing — let's say it's simple, but if you don't have that exact syntax, you can't do it in an interoperable fashion. And a number of people are tackling this in proprietary ways, as usual, because it is a real problem that people need to deal with. But whenever we spoke about this in public, people were very enthusiastic about it, saying: yes, we need something like this. And so we presented this at the IETF, and we got some basic questions, which I believe are all answered by the current draft. So now the next step is that we nominated two or three official reviewers, who will review the spec and report on the list. And then, depending on that review, we'll see whether this is something the working group wants to adopt, or if we want to do something else. That's a summary.

That is a good summary. Yeah, there's a lot there. I think it's interesting that there hasn't yet been a convention — I don't want to say standard, because that's a very loaded term — but a convention that sort of naturally emerged to do this kind of thing. The mechanisms were already there and people just didn't do it that way, so I guess that means that was not a good way to do it. Yeah, you're hitting the nail on the head, because the main mechanisms that are out there are, first, the scopes challenge that can be sent back — but scopes are really designed to do something else. Sometimes you associate authentication levels with scopes, but they aren't pure representations of an authentication level, and codifying them that way is just not very natural. So, although technically possible, few people do it. The other mechanism is the one that OpenID introduced, the claims parameter in the request. Basically, in OpenID Connect you can include a little JSON which says "claims" and lists a number of claim types that you want, and you can also say: I would like to have this claim with this value. So technically you could use that as a way of trying to enforce some of this. But again, it's a mess. One, it's only about ID tokens — you could extend it to access tokens, but it's unclear how to do it. And second, it's just too open-ended. This thing is a generic JSON, and any claim that you put in there points to different logic that needs to run on your authorization server. So saying "I support the claims parameter" is a very bold claim — no pun intended — for an authorization server, because you don't really know what that means. It's just a format for transporting this stuff; the logic to process it is different. Which is why, in the approach we are taking with Brian, we are trying to keep it very simple but also very specific. Instead of having a generic envelope in which you can put whatever you want — which might make you feel future-proof, because, well, it's just a bag, I can put in whatever, but then you never know whether the authorization server actually implements this stuff — we reduced it to a couple of parameters, so that it is very clear that the authorization server will be able to cope with them. And the good thing is that those parameters are already defined in OpenID as standalone parameters, so many authorization servers already support them; they just need to extend them to access tokens. And then we have a small extension to the authorization server metadata where you can declare that you support it. So our hope is that we will defeat these two generic mechanisms and actually drive some adoption by being very specific.
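[Editor's note: a minimal sketch of what that challenge and the client's reaction could look like, based on the proposal as described here. The error and parameter names follow the discussion above (insufficient_user_authentication, acr_values, max_age); the URLs, client ID, and ACR value are hypothetical placeholders.]

```python
# Resource server side: reject a token whose authentication is too weak,
# and say in the WWW-Authenticate challenge why and how to remediate.
# Names follow the step-up proposal as described above; values are made up.

CHALLENGE = ('Bearer error="insufficient_user_authentication", '
             'error_description="A different authentication level is required", '
             'acr_values="urn:example:acr:mfa", max_age="300"')
# -> returned with a 401 in response to the rejected request

# Client side: parse the challenge and turn it into a new authorization
# request so the user can step up, then retry the original call.
import re
from urllib.parse import urlencode

def parse_challenge(header):
    return dict(re.findall(r'(\w+)="([^"]*)"', header))

params = parse_challenge(CHALLENGE)
auth_request = "https://authorization-server.example.com/authorize?" + urlencode({
    "response_type": "code",
    "client_id": "example-client",                    # hypothetical
    "redirect_uri": "https://client.example.com/cb",  # hypothetical
    "scope": "payments:write",
    "acr_values": params.get("acr_values", ""),  # ask for the stronger ACR
    "max_age": params.get("max_age", ""),        # and a fresh-enough login
})
print(auth_request)  # send the user here, exchange the code, replay the call
```

The whole mechanism is just those two parameters riding on a new error code, which is the "very simple but very specific" point being made above.]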
Yeah, I like it, I like it. Well, and I'm glad that there's an actual draft here, a real proposal on the table now. That's a good stage to be in for trying to solve this. So I guess thanks for writing that, because that was your draft. I think that's a great place to start the conversation. So I will drop a link to that in the chat, in case anybody else is curious. This is a good place to start reading about it. There we go. All right. Yeah, all right, cool.

Let's move on, because we still have two more things that were on the agenda. So next up was Daniel Fett talking about OAuth libraries. There's no draft here; there was a mailing list discussion, and here was his summary — subject line: the frustrating lack of good libraries. So here's the summary of some of the problems with existing library support out there. Not maintained — I've definitely seen that. Not well documented — writing docs is hard, yeah. They only support a subset of the OAuth specification — I definitely agree, and don't blame them for that, because the OAuth spec is actually quite large once you start counting all of the extensions, especially. And I think this next one — they only work with selected providers — is very, very common, because what we've seen in the way OAuth gets used is that a provider will use it for their API and then publish a library, or people will write libraries for a particular API because they want to use, say, the Facebook API. And then that client ends up getting hard-coded to that, including all of the assumptions and limits and choices that that provider has made, which may not actually be part of the spec. There's a lot of optionality — you can do things five different ways — but they'll do things one way, and then the library gets written for only that one way. And a lot of them are not meant to be generic OAuth clients. And even if you wanted to write something that was a generic OAuth client, it's often quite a large undertaking.
Speaking of which: it is unclear whether they follow the recent security recommendations — also true, because documentation is hard. And yeah, some of them don't support modern features like PKCE, authorization server metadata, mTLS, et cetera, et cetera. I do like his calling out the exception, like Filip's Node implementation, which is very nice. But yeah, for the most part it is quite a mess out there. So that was his main concern, I guess: is this a problem? Is this a thing that should be solved? If so, how to solve it? So, yeah, what happened with that discussion? It was... it was pretty heated. Heated because — well, personally I felt really pulled in, because as you know, doing SDKs has been a very sizable chunk of my career, SDKs for identity. And one of the things that I mentioned almost right away, and that a lot of people in the room agreed with, is that very rarely do the SDKs out there have implementing OAuth as their goal. Normally the SDKs are designed to facilitate whatever task the developer is trying to achieve, which is a higher-level task. And good libraries are layered, so they start with a layer which is a faithful implementation of the specification, but then as you add layers, the surface that you offer to the developer gets closer to the task that you want to afford: sign in, sign up, log out, or get a token for an API — rather than "authorization code with PKCE for public clients," parentheses. And in fact, of the pure OAuth libraries that you see out there, I think the quintessential example is AppAuth, which came mostly out of Google and others when they were working on the native client best practices and there was a push toward using system browsers. And so, to help people, they made the libraries. But the thing is that Google themselves forked that library to have a Google-specific version, mostly because the normal developer doesn't want to savor the experience of doing all the various shades of OAuth — mostly they want to get the job done. So there is a bit of a mismatch between the lower level — things like "I support every single flavor of OAuth" or "I support the latest security extensions" — and getting the job done. And then there is also the enormous variety of roles and scenarios you can play. During the discussion, people pulled in both things that you do in front of a resource to protect the resource — like intercepting access tokens and validating them, or intercepting calls to a web app and performing the OpenID sign-in dance — and, on the client side, getting the token so I can call an API. And the difference between those cases is stark, both in terms of what you need to offer — in one case you cache tokens, in the other you don't; in one case you are multi-user, in the other single-user — and in the ability to stick to a standard. As you were saying earlier, an application SDK that is a client of one particular provider will have the quirks of that particular provider reflected in the code base, quirks which are not covered by the standard. Whereas for the stuff that you place in front of an API to intercept an access token, introspection and the JWT access token profile are both standards, so you can be a bit more standard. So, long story short: the problem space is huge, and the developers are often not incentivized to present things as implementations of a standard, but more as helpers to get things done.
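[Editor's note: a tiny sketch of the layering idea just described — a spec-faithful lower layer with a task-oriented surface on top. All names here are illustrative, not from any real SDK.]

```python
# Layered SDK sketch: the bottom layer speaks the wire protocol faithfully;
# the top layer speaks the developer's task. All names are illustrative.
import secrets
from urllib.parse import urlencode

# --- Lower layer: faithful to the OAuth/OIDC specification ---------------
def build_authorization_request(issuer, client_id, redirect_uri, scope,
                                code_challenge, state):
    """Assemble an RFC 6749 + PKCE authorization request URL; nothing more."""
    return f"{issuer}/authorize?" + urlencode({
        "response_type": "code", "client_id": client_id,
        "redirect_uri": redirect_uri, "scope": scope,
        "code_challenge": code_challenge, "code_challenge_method": "S256",
        "state": state,
    })

# --- Upper layer: the task the developer actually cares about ------------
class Auth:
    def __init__(self, domain, client_id, redirect_uri):
        self.domain, self.client_id, self.redirect_uri = domain, client_id, redirect_uri

    def login(self):
        """'Sign the user in' — hides flows, PKCE, and state handling."""
        state = secrets.token_urlsafe(16)
        challenge = "placeholder"  # derived from a PKCE verifier in practice
        return build_authorization_request(
            f"https://{self.domain}", self.client_id, self.redirect_uri,
            "openid profile", challenge, state)

print(Auth("issuer.example.com", "example-client",
           "https://app.example.com/cb").login())
```

Most real SDKs ship only something like the upper layer, which is exactly the mismatch with "implement the standard" described above.]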
One thing that I loved about the discussion was the proposal to draw from the experience of the OpenID Connect folks with certification. Technically you could say: you know what, do whatever — do your SDK for your own reasons — but here is a battery of tests that you can run, and a way of representing the results, that certifies that this particular SDK supports these particular flows and has these particular features. That one I really liked; it was a good outcome, I think. Yeah, OpenID has done a good job with that, because there is a list of certified OpenID implementations, and you can be reasonably certain that if it's in that list, it does all of the things described — that's the point of it. And that does not exist for OAuth, because the organization works totally differently. OpenID is a foundation; there is a lot of membership money being paid by companies into the organization that can be used for running programs, like developing a test suite or a certification program. And that doesn't exist in OAuth — there is no single entity that sort of runs it. It's the IETF, and the loose collective that that is. So it's something to consider, but there would have to be someone who would step up and essentially sponsor that work, and then also agree to maintain it in the future, which is a lot. I don't know. And then on the flip side, I think a lot of the time, people who are using OAuth are using it with OpenID Connect also. So actually, using an OpenID Connect library gets you pretty much all the way there anyway, because those are the certified libraries, and because they're OpenID Connect they often have a lot of the OAuth stuff in them too, since it's built on top of OAuth. So it's a tough one. Again, one move here would be to convince the OpenID folks to have a flavor of this certification which goes deeper, something like that. And at that point they can — and I'm saying this as Vittorio, not as one of the board of directors of the OpenID Foundation — but if people tweet about it and similar, I might bring it up at the next meeting and see what happens.

There are a couple of good points here in the chat. The problem with OAuth plus all the extensions is the conflicts with OpenID Connect — like the implicit flow being deprecated while the hybrid flow relies on it, but then at the same time the ID token is okay; and the password flow is deprecated, but it exists in the original spec. Yeah, this is also very much related to the work I'm trying to do with OAuth 2.1, which is to clean this stuff up — because yes, the password flow is deprecated by the security BCP, but if you don't read the BCP, you wouldn't know that. That's why I'm trying to collapse everything. Maybe one of the things that ties into this library discussion is that it may be easier to have the conversation about what an OAuth library should do once OAuth 2.1 is done, because then it is better defined: "here's the collection of things a library should be doing" is OAuth 2.1, and it rolls up a lot of the best practices that you would want to see in a library. But if you were to write a library today that supports RFC 6749, it's going to have a lot of problems and a lot of extension points that are meaningless, or maybes. So yeah. In particular, on the comment that there is a conflict between the deprecation of the implicit flow and the use of the hybrid flow in OpenID—
I seem to remember that specific language was added in OAuth 2.1 because someone was pestering the authors with the fact that this is a frequent point of confusion. And thank you — here, the main point is that there is no conflict. Let's look at what's there: "Removal of the implicit grant." This is non-normative, but it's trying to clarify what's happening. The intent is to no longer issue access tokens in the authorization response — that was indicated by clients using response_type=token, and this is no longer defined. Removal of this does not have an effect on extensions returning other things from the authorization endpoint, for example response_type=id_token. Perfect, wonderful. So that hopefully clarifies that the deprecation — or rather, the fact that we no longer talk about the implicit flow in 2.1 — doesn't affect the fact that it's perfectly fine to get an ID token from the front channel in OpenID. It's just a very unfortunate naming collision. I wish we could just rename things so that we wouldn't run into this particular issue. The language there is super useful, because now whenever someone brings this up, I reply with a URL pointing to that particular section, and that helps. Ideally it would be great if the question would not even arise, but I don't want to ask too much of my good luck. I'm very happy that the language is there, and I will just use it to clarify stuff all the time. But I guess the other part of this is that it is intended that the particular hybrid mode in OpenID Connect of response_type=token id_token is deprecated — that is the intent, because response_type=token is not a good idea, and you can't fix that with OpenID Connect. Correct. response_type=id_token — while OpenID Connect calls that the implicit flow, that's not the implicit flow that OAuth is removing. That's a whole different thing. Yeah, basically the misnomer is that the implicit flow in core OAuth only refers to the access token, because there is nothing else there. And deprecating that is because we want to avoid placing the access token in the front channel — that's bad. But there are other artifacts, introduced by others, that have different security characteristics and don't have the same issues. Here we are not deprecating the front channel altogether; we are deprecating the front channel for access tokens. That doesn't mean that everything else is automatically okay, but in the particular case of the ID token in the hybrid flow, we do have analysis that shows it's okay. Cool.
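[Editor's note: a tiny sketch of the distinction being drawn here — the two authorization requests differ only in response_type, but one returns an access token in the front channel (removed in OAuth 2.1) and the other returns only an ID token (still fine in OpenID Connect). URLs, client ID, state, and nonce are placeholders.]

```python
# Same authorization endpoint, different response_type, very different
# security story. All values below are placeholders.
from urllib.parse import urlencode

AUTHORIZE = "https://authorization-server.example.com/authorize"
common = {"client_id": "example-client",
          "redirect_uri": "https://app.example.com/cb",
          "scope": "openid", "state": "af0ifjsldkj", "nonce": "n-0S6_WzA2Mj"}

# OAuth 2.0 implicit grant: the *access token* comes back in the URL
# fragment. This is what OAuth 2.1 removes.
removed = AUTHORIZE + "?" + urlencode({**common, "response_type": "token"})

# OpenID Connect front-channel ID token only: a different artifact with
# different security characteristics; not removed by OAuth 2.1.
still_fine = AUTHORIZE + "?" + urlencode({**common, "response_type": "id_token"})

print(removed)
print(still_fine)
```

The deprecated OpenID Connect hybrid mode discussed above is the combination response_type=token id_token — deprecated precisely because it includes the token half.]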
So — and I think the next steps for the SDK discussion were meant to be building a huge table which defines the various use cases that can be covered by an SDK, and then we can start reasoning from there. I predict that if we do something about it, it will have to be scoped: we probably cannot do all the possible combinations, and it will either focus on the clients or the resources or some specific subset. But whatever we do, the scope needs to be better defined than just saying "SDKs." Yep. And I did offer to update the OAuth.net website with whatever might be helpful for this discussion. Right now it links out to client libraries in a bunch of languages, but these libraries have all the problems described in this thread: some of them are just out of date, some of them don't do all of the bits and pieces, whatever it is. But it is a very nice collection of libraries, and the list is maintained. If you scroll down here and you want to add one yourself, please do, because there's a link down here to the GitHub source for this page and you can just go add stuff. And I get pull requests all the time, of people both removing things that are dead — just totally dead — and adding new ones. That's kind of the best we have right now, and I'm happy to do work here to make this more useful somehow, if we can agree on what that would look like. Yeah. And on the validation side, if you go on jwt.io you'll have a list of many of the well-known libraries. At the top it says debugger, libraries — at the top of it, yeah. If you click on libraries, you'll see the different libraries and, for each of them, what features they support. This is more of a particular sliver of what you often need to do when you're working with OAuth, but it helps. And if you go on openid.net — right now I wouldn't know exactly where to point, but I think it's under specs and developer info — there is a list of implementations for relying parties. So it's not strictly OAuth, it's more adjacent, and the advantage is that the ones listed there are actually certified. So, about one of the problems that Daniel brought up, the unknown quality: if they are listed here, it will also tell you what certification they went through, so it gives you some degree of extra insight. That's the conformance profiles, right? These are the different test suites that they go through. So if they've passed the whole private_key_jwt one, that means everything in that suite has been passed — it's like a description of a collection of features, I guess. Correct, yes. So yeah, something like that could definitely be useful for OAuth, but that is a lot of work to create. I've created test suites like this before for other specs, and it's a lot of work. And one of the other problems is that it's writing code, which means you can also create bugs in the test suite — so you have to make sure that your test suite itself is correct. So yeah, it's a lot of work. Cool, okay.

We are almost at the top of the hour, so I do want to make sure we have time for this last one on the list, because this was really interesting too. It was a short discussion in the official meeting, because it was sort of reporting back about the discussion that happened the night before, which I was not at, unfortunately. So I only got to hear the summary of it at the end. So do you want to fill us in on more of the context of the conversation that took place on Wednesday? I think the first point was that some people thought that asking for PKCE everywhere was unnecessary, because existing implementations that use the nonce might be covered. As we said earlier, there are so many implementations that are both OAuth and OpenID Connect, and OpenID did introduce the nonce, which is a good mechanism for preventing injection and similar. But the main point was that there are a number of attacks that the nonce won't protect you from, and so they asked to require PKCE across the board. Actually, a couple of steps back: originally, PKCE was introduced as a mechanism for protecting the code exchange against interception for public clients.
Because the use of the code wasn't protected by an application credential — and especially once applications started using a system browser, so there was communication between apps in which the code was exchanged — people worried about this code being intercepted by another app and used to get a token instead of by the original requester. And so PKCE was introduced to protect against that. And then it turned out that there were a number of other attacks which could also occur with confidential clients, and that PKCE protects against them as well. And so we ended up having the language in the BCP, and then in 2.1, which basically says: if you're doing authorization code, no matter what flavor of client you have, you've got to use PKCE. But the thing is, we missed explaining to people why that was the case — we just added the constraint. And so some people in the community weren't clear — even in the standards community — on the fact that there are indeed attacks that the nonce won't protect you against and that PKCE does. So we had to clarify that with them, and by extension we're thinking others reading the spec might just think that we are being draconian for no reason. So we need a bit of language in the BCP that says: hey, here is a pointer to the other attacks that you might think you're protected from, but you aren't, which is why we are now mandating PKCE everywhere. So largely the discussion was getting people on board with: yes, those attacks do exist. And the other part is that the very weary people who have been working on the security BCP for a long time just want to be done — and now, yet again, more language to be added. So there was a bit of going through the stages of grief before landing on: yes, we do need to add more language. And now I think we do have agreed-upon language which helps with this aspect. Again, my jet-lagged self retained only this amount of information; if there was something else, sorry, I don't remember. Yeah, yeah. And this was an issue very, very near and dear to my heart, and that's why I was disappointed to miss the Wednesday discussion. So I look forward to hearing more about it in two weeks at the OAuth Security Workshop, when we're all back together and chatting about this stuff again. Yeah, all right.
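[Editor's note: for reference, a minimal sketch of the PKCE mechanics being discussed (RFC 7636) — the client invents a fresh secret per request, sends only its hash up front, and proves possession at the token exchange, which is the protection for the code itself that the nonce does not give you.]

```python
# PKCE (RFC 7636) in a nutshell: per-request secret, hash up front,
# proof at redemption. Pure standard library.
import base64
import hashlib
import secrets

# 1. The client invents a one-time secret before the authorization request.
code_verifier = secrets.token_urlsafe(64)  # 43-128 chars of high entropy

# 2. Only the S256 hash travels in the front channel...
digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
# ...as code_challenge=...&code_challenge_method=S256 on the authorize URL.

# 3. At the token endpoint, the client sends code_verifier along with the
# authorization code. The AS re-hashes and compares: an intercepted code is
# useless without the verifier, whether the client is public or confidential.
print("code_challenge:", code_challenge)
print("code_verifier :", code_verifier)
```

The challenge/verifier pair binds the code to the client instance that started the flow, which is why the BCP and OAuth 2.1 mandate it for every authorization code flow.]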
So yeah, let's wrap up with a quick rundown of what's coming in the future, since we're almost at the top of the hour — or half hour in, on our end. Next week there's the OpenID Foundation workshop. I will not be there. But after that is the Internet Identity Workshop, also in California, nearby. You're going to IIW, right? I'm going to IIW, and I'm also going to the OpenID Foundation workshop. And at IIW, I'm going to present the session that you usually present there, but you won't be there: the OAuth 101. So we'll see how it goes — first time in person in a long time, so I'm very curious to see how that will play out. Yep. And then after that, we will both be at the OAuth Security Workshop in Trondheim. Really looking forward to that. That is not an IETF event, but a lot of the people in the community who come to the IETF events also come to this, and it's a bit broader too, talking about interesting new security-related topics around OAuth. It gets super detailed, into the weeds of a lot of this stuff, in those sessions. So I'm excited for that.

And yeah, that'll take us through May, and then we'll hopefully report back here on what happened at those events. And yeah, see how it goes, see how it goes. All right. Well, I think that's it for today. I saw there was a question here earlier — a great question — which we'll have to save for another time: the GitHub OAuth issues. That deserves its own dedicated time, because there's a lot going on there. So maybe we'll have some more information coming out about that that we can report back on, once things are a little bit more finished being investigated and reported. So yeah, all right. Perfect. Thanks everybody. Sorry for all the technical issues at the beginning, but we did it. We did it. And we will see you again next time. Perfect. Thank you, bye.