Welcome to another round of the OAuth Happy Hour. I'm Aaron Parecki, and I'm joined here with Vittorio. Hello! It's been a while since we've been here; lots of interesting stuff has happened in the meantime, so we've got a lot to catch up on today. As always, we'll talk about some of the new things that have happened in the world of OAuth, some of the new specs that are working their way through the standards process. There have been a couple of recent events as well that we both attended and participated in: the OAuth Security Workshop and a couple of the interim meetings of the OAuth working group. So we'll chat about some of the new things that happened in those sessions too. If you're watching live, feel free to ask questions in the chat and we'll see if we can answer some of them.

Have you not had enough coffee yet today? I did take a decent amount, but apparently I'm having a hard time metabolizing it. But hey, I'm sitting by a big light, so I'm definitely awake. Fantastic. With that, which events should we start from? I think the events give us a good backbone for slotting in the various topics. I have to admit I've been a bit negligent with the interims, so I didn't follow all of them, but you did attend all of them, right? I think I did, actually. Let's go back to the last time we chatted, which was the last OAuth Happy Hour; that would have been October 7th, right? Yeah, it's pretty crazy, two months. That's a lot, and in the meantime we've had a handful of other events happening. That was right after, that's right.
I remember chatting about the HTTP Signatures spec; we had our stream right after that interim meeting, I remember, yeah. So since then we've had interims on OAuth 2.1, Rich Authorization Requests, DPoP, and Token Exchange, and then the OAuth Security Workshop. Lots going on there. Although "Token Exchange" might be a bit misleading: Token Exchange is done, it's an RFC and everything. What happened is that someone presented a possible use of Token Exchange for a particular scenario. My recollection is a bit foggy, but I seem to remember it was very, very specific, and basically no action was taken by the working group; they basically just wanted feedback. And I think they also repeated that at the Security Workshop. They did, I remember that. I didn't have time to dig deep into it, but it looked interesting. I don't think it was generalizable, though; it was a very specific circumstance and scenario. An interesting application of a spec, but not something that would become a spec in itself; at least that was the consensus, I believe. That matches my recollection as well.

I will drop a link to this in the chat, though, and let's see if it shows up on the screen, in case anybody is curious about the details. There is a recording of the presentation from the interim meeting, and I guess there's a recording of the Security Workshop session as well. Oh yeah, the videos are all online now, so you can definitely go dig into it if you're curious. But as far as I remember, the group decided not to do anything directly with that. So what else? OAuth 2.1: you presented the update on OAuth 2.1. Yes.
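For listeners who haven't seen Token Exchange (RFC 8693) in practice, here is a hedged sketch of what the form-encoded body of a token-exchange request to the token endpoint looks like. The URN values are the ones RFC 8693 defines; the token value itself is a made-up placeholder.

```python
from urllib.parse import urlencode

def build_token_exchange_request(subject_token: str) -> str:
    """Build an RFC 8693 token-exchange request body (form-encoded).

    The grant_type and *_token_type URNs are defined by RFC 8693;
    the subject_token value passed in is just a placeholder here.
    """
    params = {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        # Optional: the kind of token the client wants back.
        "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
    }
    return urlencode(params)

body = build_token_exchange_request("eyJexampleToken")
print(body)
```

This body would be POSTed to the token endpoint like any other grant; the response carries the issued token plus an `issued_token_type` field.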
Yeah, so for OAuth 2.1, we also recorded that session and I embedded the video here. The presentation I did was mostly a status update: what's changed since the last time we met, which is how I usually start, plus changes that are still in progress. A lot of it is from you, so no pointing fingers; it's been very helpful. But we did have a couple of issues I wanted to discuss with the group while everybody was there, and these spun out some pretty long discussions. A couple of them were pretty straightforward. This first one, everyone was like, cool, this sounds good, let's add that paragraph, it helps clarify things.

For the audience: this next one was one I felt very strongly about. The point is that OAuth 2.1 omits any reference to the implicit flow. As a result, a lot of people looking at OpenID Connect, which does feature one flow you can technically consider implicit because tokens come from the authorization endpoint rather than the token endpoint, legitimately ask the question: does that mean those flows defined in OpenID Connect can no longer be used? We already discussed this in past happy hours, so I won't go into details, but the point is that OpenID Connect returns ID tokens, which have a very different usage pattern from access tokens; in particular, in the variant of the flow that is in common use, they are sent via POST rather than in the URL. So this new language, which is on the slide and sits in a non-normative section of OAuth 2.1, clarifies that the fact that the implicit grant is dropped from, actually is no longer even mentioned in, OAuth 2.1 in no way invalidates using that particular flow in OpenID Connect. My hope is that this little snippet of language, and thank you so much for adding it, is going to simplify so many conversations with customers that previously required an entire word salad of caveats;
now, instead, I should just be able to point to this particular fragment in the spec. Done. Yeah. I think the confusing part is the way it gets talked about: people say the implicit flow is being removed, but what's actually being removed is response_type=token, which is what the implicit flow in OAuth 2.0 is. The intent is to make response_type=token go away by not including it, but that has nothing to do with other response types that are defined elsewhere. The big example, of course, is response_type=id_token, which is an entirely OpenID Connect thing. It's an extension of OAuth; it uses the extension points available in OAuth, and it does so by defining new stuff: new security considerations and new protection mechanisms, which is a totally reasonable thing to do. A lot of people also call that "implicit", but those two things are completely different, and this will hopefully clarify that we're not trying to bundle them all together.

You know, I think the challenge in general was nomenclature to begin with. "Implicit" just means the token, actually not just a token, any artifact, comes from the authorization endpoint. And as you say, the only concrete instance of that when OAuth 2.0 first came out was single-page apps using response_type=token and the fragment for returning it. So all of these co-occurring things ended up sharing exactly the same name. It's kind of like a sneeze and a cold being called the same thing because they come together. So here we were in the same situation. And actually, we have a question along these lines: OpenID Connect on top of OAuth 2.1, hybrid flow,
yes or no? This is related, and I was kind of curious whether it needed to go into this paragraph as well. I decided not to for the first pass, because I feel it complicates things too much to mention all the different combinations in OpenID Connect. There isn't a single hybrid mode in OpenID Connect; there are multiple hybrid modes, and there are hybrid modes that include the id_token response type as well as the token response type. The intent is to not use any mode that includes response_type token, because response_type=token is no longer part of OAuth 2.1. But that doesn't mean the other hybrid modes that OpenID Connect uses are bad, because those don't involve response_type token. I think I have a slide on this somewhere, but I don't know if I can pull it up fast enough. There's the response type "code id_token": you get an ID token plus an authorization code back in the redirect, and then you can go exchange that code for an access token. That's fine, because it's using the good authorization code flow from OAuth 2.1 together with OpenID Connect's id_token response type. But no, I don't think there's any reason to use any of the hybrid modes that include response_type token, because those all have the same problems we've been trying to avoid. And because of that, I always discouraged people from doing it even before OAuth 2.1 enshrined that.
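To make the distinction concrete: a response_type is a space-delimited set of values, and what OAuth 2.1 drops is specifically the bare `token` value, not `id_token` or combinations without `token`. A small illustrative check (my own helper for this episode, not anything from the spec):

```python
def uses_dropped_token_response_type(response_type: str) -> bool:
    """True if the response_type includes the bare `token` value,
    which OAuth 2.1 no longer defines. `id_token` is a distinct
    OpenID Connect value and is unaffected."""
    return "token" in response_type.split()

# response_type=token (classic OAuth 2.0 implicit): dropped
print(uses_dropped_token_response_type("token"))          # True
# OpenID Connect's id_token response type: not affected
print(uses_dropped_token_response_type("id_token"))       # False
# hybrid "code id_token": fine under OAuth 2.1
print(uses_dropped_token_response_type("code id_token"))  # False
# hybrid "code token": includes token, so dropped
print(uses_dropped_token_response_type("code token"))     # True
```

The split-then-compare matters: a simple substring test would wrongly flag `id_token`, which is exactly the confusion discussed above.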
I love "id_token code" because it's a nice, graceful slope of complexity. Say I only want to authenticate: I do response_type=id_token and I'm happy with that, I use it, fine. But then I decide I want to start calling APIs, and if I add "code", the flow remains the same: my middleware still accepts an ID token coming from the front channel; I'm only adding on top of it the extra dimension of the code, and the access token I get is an access token for a confidential client. It's rock solid, because I make my request to the token endpoint. So it's neat; I always loved it, and now I'd say it's the only hybrid flow that makes sense at this point. Yeah, I think that's a good way of thinking about it: if you start with only response_type=id_token, you can just add the code response type in and nothing else has to change. All your other code stays the same, and you've just added a little bit of new stuff.

Yeah, although now I'm going to paint myself into a corner with a classic discussion of what happens at that point, which I've had many times; it becomes a matter of opinion, I'm afraid. Say your middleware successfully gets an ID token, and the ID token is valid. With response_type=id_token alone, you'd allegedly be done: you get the ID token, you validate it, you drop your cookie, and now you have your session. Now say you add the code on top of it, and for some reason the code redemption doesn't work, so you don't get the access token. The question is: what should the middleware do? Should it create a session but simply not have any access token in its cache, or should it fail the entire thing? My position is that I prefer it keep working: if receiving a valid ID token, before adding "code", was enough for declaring the user signed in... Oops, I think we lost you. Are you still there? Am I still here?
I'm still here. Okay, we lost you for a second. Which part did you lose? You were talking about getting the code and deciding whether to consider that user logged in. Right. So to me, the idea is: if the ID token I received was enough to create a session for my relying party, then it should remain enough, because I'm only adding to it. The only exception, to me, would be if the access token you're getting is for the UserInfo endpoint, because at that point it's an integral part of the sign-in semantics. Let's say you just authenticated, but I cannot get the attributes I need; then I cannot say the sign-in is concluded. I didn't want to go down a rabbit hole, but I just wanted to highlight that this stuff is deceivingly simple: once it comes to implementation, you do need to reason about it, and some of the choices are non-obvious.

Okay, fair enough. Do you think it simplifies things to use only response_type=code, to get both the ID token and the access token at the same time? Then if it fails, it all fails. It simplifies it in a way, but I don't like it, for two reasons. One is that it simplifies thinking about it, but in terms of user experience it creates more pain. If you decided there was an acceptable situation in which the ID token is enough for signing in, then from the user experience perspective they get fewer errors, so it's better; but from the developer experience,
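The policy Vittorio describes can be sketched as middleware logic: create the session from a valid ID token, and treat a failed code redemption as non-fatal unless the access token is needed to fetch sign-in attributes from UserInfo. Everything here is a hypothetical illustration; `validate_id_token` and `redeem_code` are placeholder callables, not any real SDK's API.

```python
def handle_hybrid_response(id_token, code, *, userinfo_required,
                           validate_id_token, redeem_code):
    """Decide whether to establish a session for a `code id_token` response.

    Policy from the discussion: a valid ID token alone was enough before
    `code` was added, so it stays enough, unless the access token is
    integral to sign-in (e.g. attributes must come from UserInfo).
    """
    claims = validate_id_token(id_token)
    if claims is None:
        # No valid ID token: never establish a session.
        return {"session": False, "access_token": None}
    try:
        access_token = redeem_code(code)
    except Exception:
        access_token = None  # code redemption failed; maybe tolerable
    if access_token is None and userinfo_required:
        # Attributes are part of the sign-in semantics: fail the whole thing.
        return {"session": False, "access_token": None}
    return {"session": True, "access_token": access_token}
```

The `userinfo_required` flag is the crux: it encodes the one exception mentioned above, where the sign-in cannot be considered concluded without the attributes.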
I am also a big fan of being able to set up sign-in, when you don't need to call any APIs, just sign-in, the way you set up SAML or WS-Fed. Those are allegedly more complicated, but in this particular case they aren't, because if you're doing the flow where you get the ID token from the front channel, you don't need a secret: you just rely on the fact that the token gets redirected to the correct redirect URI. You need the secret only if you want to call an API. And you might want to call an API even for sign-in, if you don't want a token to end up in the user agent, which is a valid concern, or if you want only a minimal token in the user agent and then actually get the attributes through the back channel. So it's a valid choice. But unless you're in that situation, I try to delay having to use a secret as much as possible, because we know secrets are problematic: you might check one into a git repo; you need to refresh or cycle them; if you have a cluster, every instance of your app needs access to the secret. So it can be done if it's needed, but if it's not needed, I wouldn't do it just as a shortcut. A lot of people in our industry do that because it's simpler: if I only give you one sample for the SDK, my life is simpler. But I care more about the life of the user, their quality of life. Okay, that makes sense.

Let's see what else was in here. Yeah, the "iss" parameter was controversial. It was surprisingly controversial; I thought it would be more straightforward. It's a relatively simple mechanism for preventing a mix-up attack that only exists in certain cases: for example, if a client ever talks to more than one authorization server and uses the same redirect URI for all of them.
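The mechanism in that draft (later published as RFC 9207) is small: the authorization server adds an `iss` parameter to the authorization response, and the client checks it against the issuer it believes it started the flow with, so a response relayed from the wrong server gets rejected. A minimal client-side sketch; the issuer URL is made up for illustration:

```python
from urllib.parse import parse_qs, urlparse

EXPECTED_ISSUER = "https://as.example.com"  # illustrative issuer

def issuer_matches(redirect_response_url: str,
                   expected_issuer: str = EXPECTED_ISSUER) -> bool:
    """Reject the authorization response if its `iss` parameter is
    missing or doesn't match the issuer this client expects
    (the mix-up defense described above)."""
    query = parse_qs(urlparse(redirect_response_url).query)
    return query.get("iss", [None])[0] == expected_issuer
```

A client talking to a single authorization server with a dedicated redirect URI doesn't strictly need this, which is exactly why the mechanism is scoped to the multi-server, shared-redirect-URI case.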
So we should tell the listeners that there is a draft in the IETF which describes this particular parameter, for the reasons you just mentioned. It exists as its own draft, and it's going through the process at record speed, but it's not an RFC yet. It is going through at record speed, I think partly because it's a pretty simple mechanism that is well scoped to a particular problem, so there's not a lot to talk about: there is a problem in these cases, it is documented, and here's a solution that is pretty straightforward to understand. So that draft does exist and is moving forward, but it's not an RFC yet. The question was whether we bring it into OAuth 2.1, because it seems to make sense. The way we left it after the interim meeting was: no, we're not going to incorporate it yet, but we will revisit the discussion after it becomes an RFC, because one of the goals of OAuth 2.1 is to just consolidate existing specs, not invent new things. So until this exists as an RFC, it doesn't make sense to talk about bringing it into 2.1. Makes sense, yeah.

Requiring HTTPS redirect URIs: that one was also pretty straightforward. OAuth 2.0 was created so long ago that deploying HTTPS was a challenge.
It was a big challenge, and often expensive: expensive certificates you had to buy. Let's Encrypt changed everything, and now it's pretty trivial to deploy HTTPS on your applications, which you should be doing anyway for many other reasons besides OAuth. So it makes sense to just require it now for all redirect URIs. The spec even used to have a note that the authorization server should warn the user about an insecure redirect, which I don't think I've ever seen anybody do. But it was one of the recommendations: on the OAuth consent screen, the idea was that you would see a warning saying "you're about to get redirected to an insecure location," which seems like a very scary warning to give to users, and I don't know if I ever saw it anywhere.

I haven't either, and I think this points to a more general thing I've observed: when we put indications about user experience in the specs, implementations very rarely follow them. If it's something that can't be verified by a certification test suite, people just do whatever, and that includes consent. We have a number of places where we say you have to ask for consent, and if you look at real systems, there are always circumstances in which they decide to skip it. So I'm not surprised we haven't seen this warning in the wild. But the good news is that the circumstance that made that sentence appear in the original spec doesn't really exist anymore, so it's actually simpler, and it will no longer be part of the recommendation. That's true. And there's the exception for development time:
like when you're hitting localhost, although frankly it's really simple to add HTTPS for localhost too. And to be complete: in the spec we can say you are not strictly forced to use HTTPS for localhost, but if you look at how cookies behave today, when you're not on HTTPS various browsers will treat the SameSite property of a cookie as Strict no matter what you specify when you write the cookie. So in practice, even with this exception, the reality of the user agents is that eventually people will have to use HTTPS. If you have questions about how to do it, hit us up and we can help. But yeah, at the spec level, technically you can omit it if you want to. Yep. There's another good reason to use HTTPS in development as well, which is that a lot of browser APIs actually require it: things like the geolocation API, and I think even the Web Crypto API. There's a handful of these JavaScript APIs in browsers that don't exist on a plain HTTP page. So I set up HTTPS for my development machines long ago, because it's basically required if you're doing any sort of modern development these days.

And that's one of the things I miss, to be honest, about the Microsoft ecosystem. In Visual Studio there was just a checkbox saying "use HTTPS," and behind the scenes every test artifact, be it an API or a web page, would go through IIS Express, or in more recent times some other test web server, and it would just handle everything. What I do now: I have to use Node, because Auth0 is mostly Node. Although I can technically use ASP.NET, people would look at me and say, huh, are you the guy from Microsoft again?
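An authorization server enforcing the new rule might validate registered redirect URIs roughly like this: require `https`, but carve out the loopback exception discussed above. The exact shape of the carve-out here is my assumption of a typical policy, not text from the spec.

```python
from urllib.parse import urlparse

def redirect_uri_allowed(uri: str) -> bool:
    """Accept https redirect URIs; allow plain http only for
    loopback addresses (the development-time exception)."""
    parsed = urlparse(uri)
    if parsed.scheme == "https":
        return True
    # Development carve-out: plain http only on loopback.
    return (parsed.scheme == "http"
            and parsed.hostname in {"localhost", "127.0.0.1", "::1"})
```

Note that `urlparse(...).hostname` lowercases the host and strips IPv6 brackets, so `http://[::1]:8080/cb` is matched by the `"::1"` entry.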
So no, no, that's not it here. Basically, I use Caddy. Caddy is a super easy reverse proxy. You just make sure the ports match between it and your Node app, and it generates a self-signed certificate on its own; you don't need to do any OpenSSL magic. It's super simple: just download Caddy, launch it in reverse-proxy mode on whatever port you want, and that's it, really, really easy. That does sound a lot easier than what I've been doing, which is: I usually install Nginx on my laptop, a full Nginx install, and then I have my own certificate authority that I created ages ago, probably over 10 years ago at this point. I keep that public key up on a web server, and I always download it onto my new machines and make them trust my own root authority. Then I have a website I made 10 years ago where I can say, hey, please sign a certificate for this domain; my authority signs it, I download the cert, plug it into Nginx, and off we go. But yeah, Caddy sounds a lot simpler than running your own certificate authority and moving it onto every machine you get. Yes, I'd say it is simpler. But hey, this way you have complete control, and it's all internal to a developer machine, so in theory you shouldn't worry too much. Having your own certificate authority is far stricter than using self-signed certificates. But for me, at this point in time, when I code it's just because I'm trying to do a quick proof of concept; I don't really code for production. So if I were to maintain my own certificate authority, it would carry way more weight than all this stuff warrants. Yeah, exactly. Anyway, okay, last one, which is the most complicated one.
The last issue we talked about during the session was that there's a handful of things in the spec that are there because PKCE didn't exist originally, things that try to prevent authorization code replay or various mix-up attacks: authorization codes being single-use, or including the redirect URI when you go get a token. The redirect has already happened, it's already been used by the time you've got the authorization code, so why do you need to send it again? Well, it's in there for a particular reason. Things like that are basically more or less not needed if you're doing PKCE, and the discussion was whether it's a good idea to remove some of them. It turns out there were pretty strong differences of opinion on a lot of the points, so it spun out into a whole long mailing-list thread as well, which is still ongoing. My takeaway was that no, we should not try to group all of these into a single issue; maybe it's worth addressing them one by one. For example, it was pointed out in several of the discussions that there are other reasons it still makes sense for authorization codes to be single-use, even though some people don't actually enforce that: some authorization servers let authorization codes be reused within a short window, which is technically not allowed, but people do it anyway. My personal feeling is that I'd like to get rid of that requirement, because it's really not needed if you're using PKCE, but it didn't make sense to group these together the way I thought it did when I created this slide. Yeah, this was a difficult one, because the thing you're doing with OAuth 2.1 is a very brave thing: you're touching things that people just took for granted, axiomatically.
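Since PKCE keeps coming up as the thing that makes these legacy checks redundant, here is its core in code: the client invents a random verifier per request, sends only its SHA-256 hash up front, and proves possession of the verifier at the token endpoint. This is the standard RFC 7636 S256 construction, nothing novel.

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate an RFC 7636 code_verifier and its S256 code_challenge."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def verify_pkce(verifier: str, challenge: str) -> bool:
    """What the authorization server checks at the token endpoint."""
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode() == challenge
```

Because the verifier is fresh per authorization request and only its hash travels through the front channel, a stolen authorization code is useless without the verifier, which is what undercuts the original rationale for some of the checks being debated.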
And questioning common wisdom is hard. We tend to build on top of it and run, rather than going back to it. So with these kinds of things, visceral reactions like "oh no, authorization codes are single-use, how dare you suggest otherwise" are to be expected. In fact, a lot of those were knee-jerk reactions, "no, you need it," rather than bringing the exact scenario they thought would break; more like "hey, you are attacking my way of life." But some points were valid. Take the single use of the authorization code: some major, major implementations supported using the same code multiple times within a certain interval, simply because there are practicalities behind it. If you have a network failure, then from the authorization server's point of view you already redeemed the code, but from your client's point of view you didn't. When you attempt again, you get a rejection, and allegedly you have to go back to the front channel and ask the user again so you get another code. And this was the reaction of some of the old people, not old demographically, but old as in they've been in this space for a long time. The immediate reaction to "you just obtain another code" was: wait a minute, what about the experience? What about the code you need to write? You're already running in the back end, and now you need to go back to the front channel. This isn't trivial, and some of these arguments are really deeply rooted. That's why I say you're brave: questioning that stuff is just going to be more difficult than if this thing were brand new and you could look objectively at the merit of the argument. You also have to fight the inertia of "we've had this forever and it works, so why do you want to mess with it?" The redirect URI at the token endpoint is also a tough one.
Because for that one, I remember in the past using it as a way of differentiating between deployments: say you have one app that can run in staging or in some special cluster and also in proper production, or in the public cloud and some other cloud, but it's ultimately the same app, the same client, with multiple redirect URIs. Some of this could exist for multiple reasons: you might have a jail in which you want to sandbox some interaction, so you send that party to a different redirect URI because you want it to hit the test cluster, something like that. So providing the redirect URI to the token endpoint was part of a larger transaction; similarly, when you trigger a sign-out, maybe from the back channel, specifying the URL is just a way of saying "this is the particular instance I want to sign out from" when you have multiple deployments of the same app. I acknowledge this is probably beyond what the specification intended, but at the same time, requirements like "you can rely on that URL being there" have been enshrined in the spec for many years, so you can expect people to have taken dependencies on it beyond the security aspect. So I kind of understand some of the reluctance. It also feels a bit like a switcheroo: on one side you say "we are only going to put into 2.1 what is best practice," and this instead seems to go a bit farther. It's more of an optimization: "I no longer need to do this because I have this extra measure." It's not unreasonable, but it's a bit of a different deal from how, at least as I've heard it, 2.1 has been presented. So I understand why there's so much, I won't say animosity, let's just say passion, in discussing those issues. Yeah, I don't get the sense any of the discussions have had animosity.
That's not the sense I'm getting from these. I just think there are a lot of different opinions and perspectives coming at it, and some people have been doing things a certain way for a long time, so they want to make sure those things keep working, which is totally reasonable. But that's why we discuss them; that's why we bring these up. Yeah, exactly right. So that was 2.1. And here I'll say the same thing I said in the room: I am looking forward to, and dreading, the moment when OAuth 2.1 gets to the native clients discussion. Because in the current formulation there is one single set of advice and guidance for both mobile apps and desktop apps, and I think that today the characteristics of the two environments, from both the security and the experience perspectives, are different enough that they warrant different guidance. If you try to keep one single set, you'll pay a price in usability or convenience and things like that. There, too, I think there will be a lot of deeply rooted opinions, so it's going to be an interesting discussion. I hope it will be productive and, not to say pitchforks, but we'll see how it goes. I hope so too. That's basically next on the list, after we resolve the current things we just talked about, and it seems like the way a lot of them will get resolved is by not changing things. Yeah.

So then we chatted about Rich Authorization Requests. Were you there for that one? I was, but I was heavily multitasking, so my recollection is fuzzy: was Brian presenting, or was it Torsten? I think it was Brian. I'm pretty sure Brian presented DPoP, though. Oh, you're right. And I know Torsten also presented something, but I don't remember what. I don't actually remember this meeting, now that I think about it, and I don't know why that is. That's wonderful. Let's see if this one works.
This one works. All right. Oh, I took notes; no wonder I don't remember it. Because you're just a conduit, like automatic writing from the spirits. Yeah, totally: when you're taking notes you just have to channel the words flying by and write them down as fast as possible. And see, it was Torsten; it was presented by Torsten. There we go. Oh, Brian is here, and he says, "It wasn't me." Do we need a little song here, like "It Wasn't Me"? Yep. All right. What was the recap of this one? Oh wow, so we were talking about implementations of it; I guess there wasn't much of an update. Yeah, it seemed pretty plain. And then we took the usual tangent of whether we can stretch scopes to make them do more than they normally would. Oh, Denis had concerns about privacy. I'm very surprised. Surprised? I'm being funny: for the listeners, Denis is a character on the list. He's like a one-issue voter; he cares mostly about privacy, so everything he says is about privacy. That's why I was being funny. And I remember, because he brought up stuff from my spec, the access token profile. Yeah, that was kind of confusing. In fact, I stayed out of it because Torsten and Justin handled it. Now I remember, I one hundred percent remember, why I don't remember this conversation: it was a lot of back and forth, if you look at the notes. A lot of back and forth I was trying to capture. And some of it was also partly in the chat. I really don't know how note-takers manage to stay on top of these events, because part of it was in the queuing system and part was in the chat. Also, I'm not a touch typist, which is why I never volunteer to do it. Also, I don't speak English very well, so I don't understand when people speak very fast. Those are my excuses for never taking notes, but you guys are heroes.
So thank you for your service, sir. Yep. Oh, I'm probably due for another round, because that was a couple of months ago, so I'll get back in that queue. Yeah. So then, oops, I lost my tab, then we had DPoP. Oh, that one was interesting. If I remember correctly, that was Mike. Mike presented DPoP? You know what we might be remembering? We might be remembering Brian doing this at the OSW; he did present on DPoP, with all his fantastic photos and such. I love this line in the notes: "Brian starts to start the presentation, but Mike shows up just in time." "Starts to start" is like an acceleration of sorts; where does it end? It's philosophical.

The DPoP discussion has been fascinating, because I feel it's another one of those topics where there are a lot of differing opinions and differing amounts of how far people want to go with it, and there's still a lot to untangle. I think the expectations need to be set, otherwise we'll just keep iterating. And there is a lot at stake behind DPoP. To me, one of the most urgent reasons to have DPoP finalized is FAPI 2. FAPI 1, the Financial-grade API, is predicated on MTLS: it doesn't just say "I need sender constraining," it says "I need MTLS." And MTLS is an interesting beast, because it operates at the level of the pipe, so if you run a service at hyperscale, as a lot of us do, and you're using things like Cloudflare and similar, the way you implement it has important infrastructural implications. In FAPI 2 the language is more generic, "use sender constraining," but let's be honest with each other: right now sender constraining means MTLS, and maybe tomorrow DPoP. And DPoP is the only other contender, because we know that token binding died, an undignified death, when Chrome abandoned it.
So yeah, I should be more careful with my language, because I know some people are still processing the grief of that happening, but that's what happened. But anyway, the point is, I think it would be super useful to have DPoP, or at least a subset of it, finalized. But the challenge is that DPoP is different things to different people. On one side, there's the crowd that says, I want to use DPoP for everything, not just for single-page apps. Another is seeing confusion with credentialed clients, or actually having more stable credentials. Other people are more interested in performance for hyperscaling, where using asymmetric crypto as we do today might not give them the level of performance that they need. So I think it would be useful for us as a community to put some stakes in the ground and say, okay, those are all interesting problems, but the problem that we are pursuing with this version of the spec is this one. And once you agree on that, then we can resume the technical discussion. Because right now, on the surface we are discussing technical merits, but in reality, my impression is there are different world views in play. It's not just a matter of the technical stuff, and discussing both levels at the same time is super hard. So my suggestion to the stewards of the spec would be: clarify scope, and then we can try to close this thing. Otherwise we'll just keep... I can't say being idle, because there is a lot of work. This is one of the most exciting things in our space, but I really do hope that we can align ourselves to deliver something, so we can implement it. I agree, and I think that was one of the pieces of the presentation at the interim where I was seeing a lot of confusion, which was, I was seeing a lot of discussion about particular solutions, or applying it in a certain way, without providing the context for it.
So I felt like that was the missing part: okay, we understand you want to use it for this, but what exactly is the problem that you are facing that is causing you to want to do that? And I felt like those problems weren't getting expressed well enough, so that was one of the pieces of feedback that they did take away, and then they came back onto the mailing list, did some more discussion there, and provided that context. And I think that's been very helpful, because if the group has five different problem scenarios that everyone is thinking about differently, and then we all go and want particular aspects in the spec without agreeing on the problem space, that's what causes the mess. So if we can all get on the same page about, here are the problems we care about, these are the ones we think we can solve with this, then it should be a much easier discussion to actually solve those. Yeah, 400% agree. 400, cool. Okay, DPoP, we talked about token exchange. We did. So then the security workshop was a little bit later, that was just two weeks ago, in fact. Yeah, that's crazy. Beginning of the month, at virtual coordinates 0,0. Very Cartesian. And so this event, for people who don't know... actually hold on, before we get to this, there was a question that I just remembered I wanted to mention before we move on to the security workshop, which is this one: is there any way to join the OAuth working group meetings as a listener? Yeah, you can join the OAuth working group meetings as a listener. Isn't there an IPR agreement? So, there isn't a special listener mode, you can just join. There's no membership process or anything, but if you are interested in joining, you can absolutely join. You do have to agree to the terms of participating in the IETF working group, which is pretty straightforward. And yeah, you can definitely tune in.
The interim meetings are held, I think, usually in WebEx, and then the recordings get put up later on YouTube with the rest of the IETF meetings, but anybody's welcome to join. There's no... So now, I might be confusing this with the OpenID Foundation, which is a different animal, but I know that apart from special events where it's explicitly said there's no need to sign an IPR agreement, the IETF has this "Note Well", which includes something that you are agreeing upon, and there might be a document to sign, like something you might have to sign in order to participate. I don't recall if it applies to the IETF or not. That is not the IETF. I'm trying to find the link now. Let's see, not that one. I'm trying to find the link for it. It is, I think it's in this working groups page: information about participating in the IETF. This is the page. So... Or maybe, exactly, you can read, but if you want to send mail, then there might be some of those aspects. The thing is, there's no formal... yeah, it even says it here: the IETF has no formal membership, no membership fee and nothing to sign. By participating, you do accept the IETF's rules, including the rules about intellectual property. There it is. It is very accessible. Fantastic. Yeah, then it's really OpenID where you needed to do the IPR. In fact, I had signed an IPR agreement for Connect, and then I started recently contributing to FastFed, and I had to go and sign another document to extend my IPR scope, but apparently the IETF is more relaxed. W3C is even worse. There's even more hoops to go through if you're going to participate in those. Yeah, even for the community group, I had to sign something, which I remember because, of course, before signing anything at all, I sent it to legal, and that was a whole adventure.
Every time you speak with an attorney... of course they mean well and they want to protect you, but it's different than working with engineers. It's just a completely different plane of existence. So yes, this is the link for how to participate. It's essentially: you just show up. That's the short version. And if you're curious about when the meetings are, I've been putting them on events.oauth.net, which is an unofficial listing of OAuth events. I put the IETF meetings as well as related community events on there. It's incredibly useful. That page is fantastic. Yeah, I feel like it's a lot more accessible than some of the other IETF pages. Oh, yeah, absolutely. And also, you include everything, including things like IIW, right? Yeah, yeah. Oh, I need to add that now that it's got a date and registration's open. But for past events, you can see we've got, yeah, Identiverse; when there's a major conference that's relevant, I'll put it on here as well. The federated identity and browsers workshop, the W3C group that you've been involved with, I've been putting on here as well. Very nice. And I was putting our happy hours on here, but I haven't been lately, because I feel like it's borderline whether it makes sense for them to go on here. Anyway, that is a great place. And if you do want to add it to your calendar, there's a feed at the bottom; just subscribe to that in Google or Apple Calendar or whatever, even on your phone, and you'll get the events on your device, which is great. Yeah, so the IETF meetings, those are the official ones where there are minutes on the record, official decisions, all that kind of stuff, and you do have to agree to the IETF's IPR rules by participating. It's very informal, but that is technically the official side.
Totally separate is the OAuth Security Workshop, which is not part of the IETF. It is not official in any way. It is a community-organized event that invites people from the OAuth community as well as others. And it's mostly others. Mostly others, and then a handful of the OAuth people that author the specs. So the thing with the IETF is, first, it's the weirdest conference you'll ever go to. But the main thing is, if you are interested mostly in one working group, the main advantage is networking: in the same space you'll find a lot of experts that care about the same things you care about, so you can have a lot of good networking discussions and move things forward. It's very meaningful. But in terms of structured moments, those are ridiculously small. You travel all the way to some city, and then you have something like two hours, and your presentation is something like 20 minutes. I remember the first time I went in person: I got to the mic to present my spec and have some high-bandwidth conversation, and all my time was lost because someone said, "I didn't really read the spec, but..." and he had a question which we had already covered on the mailing list. And the entire 20 minutes of high bandwidth was gone. But luckily, at dinner in front of a beer, we had all the discussions we needed to have. Do you know if the next IETF is going to be in person? If I remember correctly, it is virtual. No, it keeps going back and forth. Now I can't remember the current state of it. It was going to be in Bangkok, and then that got canceled. But I think it's still going to be in person, but they don't know where yet. But now it's getting to be pretty close, it's only three months away, so I'm wondering if it's back to virtual. Yeah, IIW is in person, but, Omicron. So who knows if it will actually happen. I hope so, because I am so tired. I'm so done with Zoom.
And after having tasted Identiverse and the European Identity Summit and Authenticate in Seattle... yeah, virtual stuff. I love the OSW, but virtual stuff has gotten really old, let's face it. Yeah. The OAuth Security Workshop had been an in-person event up until last year, and it was virtual this time two weeks ago. And it was a great event. I thought it was well run. There were a lot of useful sessions. It was pretty dense, a lot of good discussions in all of them, but it's still a virtual event, and it is still very tiring being on Zoom for eight hours, two days in a row. Yeah. And you don't get the snack breaks in between to chat with people. So, yeah. Yeah, there is no serendipity. It's all structured time. And I guess that we'll talk about the content next time, given that we now have only three minutes. Ah, I guess we are out of time. So, yeah, we'll have to chat about some of the interesting sessions there later. But the workshop, it was two days, full schedule both days, lots of great sessions. You can see the timetable here; it's mostly time zone conversions, that's what the space is used for, because people were joining from everywhere. But it was really good, a lot of good sessions, a lot of good notes coming out of those. All the videos from those are online as well now. So, if you are curious about any of these topics, you can go back and watch those. And I think it's going to be worth doing commentary. I guess the next time we do a happy hour will probably be January, but my suggestion is we should pick it up where we are leaving it today, because some of the ideas were actually pretty new. So, I think it's worth having a high-bandwidth conversation about it. Yep, I think it's a great idea. So, yeah, if anybody wants to do a little bit of reading through the notes and come with any questions, that's great too. We can kind of have a discussion that way. So, yeah, sounds good.
That'll be a great place to pick up after next week. Wonderful. I don't know why I said next week. It'll be sometime in January. Yeah, next month. All right. Yeah, thanks for joining. Thanks for joining live, and drop questions in the comments later. If you have any other questions, we're always watching the comments on here. So, with that, I guess we will say goodbye for now, and make sure you subscribe to this channel so that you can get notified when we do have the next one. We post them on YouTube. Otherwise, you can also find the schedule of upcoming events at oktadev.events. That's our calendar for the OAuth workshops that I run, as well as conferences that our teams are speaking at, and these happy hours get posted there. So, lots of good places to find us again later. All right? Wonderful. Thank you all. Thank you. And happy holidays. Bye.