We have Vittorio here as well. Plan for today: we're gonna catch you up on what's been going on in the world of OAuth. It's been a while since we've done one of these, and we've since had our first in-person meeting of the OAuth group in like two years. So yeah, that's exciting. So, what do you wanna talk about today, Vittorio? Well, Elon, I'm hearing, is about to buy Twitter. Yeah, we'll worry about the identity implications when and if that happens. For the time being, I share your excitement: finally, finally, after two and a half years, we've met at this special flavor of IETF. IETF is normally a pretty large event, like 1,000 to 1,500 people. This time we were 300, 400 instead, but the spirit was still there, and it was such a breath of fresh air to be able to keep flogging the dead horses of identity even at dinner, instead of just a Zoom that starts and ends and then you go on with your day. So it was fantastic. Yes, let's talk about it. Yes, exactly. All right. Yeah, the last time we met in person, I believe, was Singapore 2019, is that right? November? Which I skipped. The last one I did in person was in Montreal. Yep. I did go to the one in Singapore and did not know that was going to be the last trip for so long. So yeah, it was nice to be back. It was a smaller group in person in the meetings, but I felt like we had a really good turnout of the OAuth group anyway. Yeah, yeah. For OAuth, clearly people had pent-up things to talk about. In fact, not only did we have a decent number, but the agenda was packed, we used all the time available and then went over time, and then we had all the social moments in which, between laughs and drinks, we talked shop the entire time. So for identity geeks, it was a fantastic five days. "For identity geeks" is a very good way to caveat that, yes. Yeah, so this was the few days we had.
We had two formal sessions on the schedule and two official side meetings. The way this works is the groups get one or two slots, usually two-hour slots, on the schedule during the week at the IETF. And that's actually one of the reasons that, while not meeting in person, the OAuth group and a few others I heard of switched to just doing interim meetings rather than trying to cram everything into the one week: it's hard to schedule that many meetings across that many different groups, a lot of people participate in multiple overlapping groups, and with everything being virtual, there's not a lot of benefit to cramming everything into a week. So a lot of the groups switched to interim meetings spread throughout the course of the year, and that worked really well for virtual. I think it worked a lot better virtually, frankly, but there is a lot of value to being in person, condensed into a week. So yeah, we had two slots on the schedule of two-hour meetings, and then we had two side meetings, which basically meant there was a room booked for a certain time, with not necessarily a planned agenda, not recorded, and not necessarily minuted in the same detail, but it was a time that was published on the schedule, and people knew that there were going to be conversations happening there. Those were also very cool. And the cool thing about that room was the microphones: there were these horseshoe-shaped tables with a microphone at each place, and those microphones had a sequencing in which, when you took the mic, you would proactively disconnect anyone else who was talking. So it was funny to see the various personalities that would forcibly interrupt people so they could get a little bit in. I will let you guess in which category Aaron and myself belong. We are not going to do that. It was a fun moment of color that you would not get online.
And it was fun. I thought that was interesting. Honestly, that room was too big for the group that we had, but it was really funny. It was the mics you see at those government panels and such, where everybody has their little box with the little stick pointing up and a big red light showing who's on. I don't think anybody realized what was happening at first, but as soon as you pressed the button to say something, it would shut off whoever else was talking. And because the room was so big, you could only really hear whoever was mic'd. So it was a very effective way of just being like, my turn. Yeah. So, let's jump into what was on the schedule for the first session. We had four topics on the agenda. First, the chairs' update; that's just administrative work, right? Nothing significant really. I think it was some status updates on some of the drafts, in terms of where they are in the post-working-group life cycle of working their way through the rest of the machine. That was a mystery, yeah. A couple of victory laps for things that became RFCs since the last time we met. It's not often we get to celebrate accomplishments, so it deserves a moment of public acknowledgement. But yeah, mostly administrative work. Also true. Yep. I think your draft was one of those, right? It should have been, but I think it was approved before the last virtual one, so it wasn't mentioned here; it was mentioned during the virtual one. I think the iss draft, that was like the fastest I've seen in history. And there was another one which now escapes me. Well, it's probably in the minutes, right? That's possible, but we can follow up. 9207, that one is issuer, right? I think it's the issuer one, yeah. Yeah, and then the JSON token response is in the queue.
Thumbprint URI, Rich Authorization Requests, and the Security BCP have consensus, but are pending write-ups by the chairs. So there we go. And in particular for the Security BCP, things did happen during IETF. Yes. I feel like this is a draft that just keeps going; it just keeps giving. At some point we have to just draw a line and finish it, even though the security of the space won't stop evolving. So yes, we'll get there. Yeah, you're right, we need to get a snapshot of the state today. And I think at this point we are reasonably close. Let's say there was this one last controversy, which I believe has been ironed out, thanks probably also to the high-bandwidth conversation we had in Vienna. So I'm confident that Daniel Fett will soon be able to rest and have this thing finally out. Yeah, yeah. So the first draft we actually talked about was DPoP, which we have mentioned quite a few times on this channel before, because it has also been around for a while and gone through quite a number of changes. Yeah, this has been worked on since 2019. Yeah. And I think in 2020... We were both in the place where it was born: it was in Stuttgart, we were doing the OSW, and someone came up with the idea. And then this uncommonly numerous group of authors decided to put it together, and here we are. Yeah. Oh, that reminds me: I think one of the first items on the agenda for this document was a comment about the number of authors of this draft. Correct, it was unusually high. And so Roman, the area director, mentioned that it's unusual and some people may frown at it, but that he was willing to make the case for all those authors if everyone felt that they should all be authors. And Brian Campbell, who was presenting, basically replied: yes, they all had significant contributions, so they should all be in the author list. And so we are keeping the plan of record and they are still there.
However, I'm realizing now that we are making a classic mistake of identity people: we are just assuming that everyone that is tuned in knows what DPoP is. So my suggestion is, let's use one sentence to describe what that thing is. And if I might suggest one: DPoP is an extension to OAuth which allows you to use tokens by proving possession of one particular key, which eliminates many of the shortcomings that you have in classic OAuth, in which tokens are bearer tokens. Bearer tokens are tokens that can be used just because you hold the bits of the token; hence, they are vulnerable to man-in-the-middle attacks or any attack that leaks the token. But with DPoP, you have a mechanism for negotiating a key and using that key to protect part of the message, so that if a token and the message get intercepted, the best an attacker can do is replay exactly the same message. If they want to use that token to send another message, they can't. That's the most concise way I can think of. What do you think, Aaron? I think that's a great description. I will just add that it is one of a couple of different attempts at this problem. It was specifically created to be usable by single-page apps, whereas some of the other options are not possible there, or are very difficult. So that's the specific thing it's targeting, but it can be used by any app that can handle a private key. And, shameless plug: we have one episode of the Identity Unlocked podcast in which Brian Campbell goes into the details of DPoP, of mTLS, and this space in general. Yeah, I remember that one, also very good. Oh, it looks like Brian is also here. Do you have a comment, Brian? Do you want to say something? Other than just the general sigh of exasperation. Anyway, yeah, so what happened with this draft during the meeting? It seemed like it was mostly a status update.
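[Editor's note] To make the one-sentence DPoP description above concrete, here is a minimal sketch of what a DPoP proof looks like on the wire: a JWT whose header carries the public key and whose payload binds the proof to one specific HTTP request. This is an illustration assuming the draft's `htm`/`htu`/`jti`/`iat` claims; the real proof must be signed (e.g. ES256) with a JOSE library, so this sketch stops at the signing input, and the example key values are placeholders.

```python
import base64
import json
import time
import uuid

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JOSE requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def build_dpop_proof_parts(method: str, url: str, public_jwk: dict) -> str:
    """Assemble the signing input (header.payload) of a DPoP proof JWT.

    A real client signs this with the private key matching `public_jwk`
    and appends the signature as the third JWT segment.
    """
    header = {
        "typ": "dpop+jwt",   # marks this JWT as a DPoP proof
        "alg": "ES256",      # asymmetric only; symmetric algs were ruled out
        "jwk": public_jwk,   # the public key the access token gets bound to
    }
    payload = {
        "jti": str(uuid.uuid4()),  # unique ID so a proof can't be replayed elsewhere
        "htm": method,             # HTTP method of this request...
        "htu": url,                # ...and its URL: the proof is per-message
        "iat": int(time.time()),   # freshness check
    }
    return b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())

# Hypothetical EC public key, for illustration only.
jwk = {"kty": "EC", "crv": "P-256", "x": "...x...", "y": "...y..."}
signing_input = build_dpop_proof_parts("POST", "https://server.example.com/token", jwk)
```

Because `htm` and `htu` are inside the signed proof, an intercepted token plus proof can only be replayed against that exact endpoint, which is the property described above.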
There were a couple of little details to be worked out, but I don't remember there being a lot of discussion. Yeah, it was quite uncontroversial, and I remember being very happy with that, because the last few times there were some discussions at the fundamental level, like: should we be able to use symmetric keys, or should we be giving guidance that assumes stuff inside the token rather than being generic with respect to how the access token is encoded. None of that happened this time. This time it was smooth sailing. And in fact, at the end of the meeting, the chairs announced the working group last call for the draft, usually a very good indication that we are within line of sight of the finish line. Yep, yep. I think that's great. And I remember that it was scheduled for 45 minutes and, if I remember correctly, finished well under the amount of time scheduled for it, which has been unusual for that draft in particular in past meetings. Very true, yeah. Yeah. So here's a question about this. Great question from NoName, who ironically I have seen on this channel before, so NoName is a name in itself: single-page apps without a backend can't handle private keys, but if we have a backend, why not use it as a proxy, combined with the code flow? Good question, two things. Single-page apps can handle private keys, which is, I guess, a relatively recent development in browsers: there is the Web Crypto API, and that API lets the browser generate non-exportable private keys that the browser can then use to sign things with. So the idea is leveraging that mechanism in the browser to be able to do things with private keys, so that we can have a lot better security. Non-exportable being the key there, no pun intended.
Non-exportable meaning that if an attacker is able to get JavaScript into your app for some reason, that attacker's JavaScript also can't extract the private key, because your own code can't either. So that's great; more things should use that. However, there aren't a lot of good, well-established patterns for using it yet, and this is supposed to be one of them, so I'm excited to see that happen. And on the second part of that question, why not use the backend as a proxy: that is not always trivial. Let's say it depends on what you're doing on your backend. Having a backend is not a zero-or-one thing. You can have a backend but not have full control over the code that runs on it. Or you might want to limit the amount of computation there because it's expensive. Or you might have APIs that are geo-located, and having to go through your backend would force you to embed advanced logic to deal with those considerations. Also, your workforce might mostly work with JavaScript, and so proxying everything would be expensive both in money and from a functionality and performance perspective. And it might just not be the way your developers operate. So there is a large number of practical reasons for which the mere fact that you're running some code on a backend might not allow you to do everything on the backend, because everything you do there has implications, especially at large scale. Which is one of the reasons for which, with Brian a couple of years ago, we suggested something like a hybrid between doing everything in JavaScript and doing everything in the backend. It's a bit on the back burner, but eventually it will come back. Yeah. And then I guess one other thought on that: yes, with a backend there is generally less risk of a token leaking from a server-side system compared to a single-page app.
So that's why it's very common to use bearer tokens from server-side apps, both as access tokens but also as a client secret, where it's a shared secret. It's very common because there's much less risk of a token leaking that way compared to a mobile app or a single-page app. However, there's not no risk, and sometimes that risk is still too much for someone, and they still want to use a sender-constrained token even though it's server-side code. And I believe the Security BCP talks about some of the more concrete examples of why you may want to do that. One example, and obviously these don't apply to everybody because it's very much dependent on your architecture and how many moving parts you have, which is why not everybody cares about it: say you have a bunch of different resource servers, and one of them you maybe don't trust as much for some reason. A client sends an access token to that resource server; the resource server then has the bearer token, and it could use it to go access things in other APIs, because it has the token. If you want to prevent that from being possible, you can't use a bearer token anymore; you have to do something else. So that's another reason to add in that sender constraint. You're laughing. Do you have an example of this? No, it's more that I'm having traumatic memories coming back as you describe this. Because this is at the core of the controversy about the audience of access tokens: there is the camp that suggests that access tokens should be wide-spectrum artifacts that can be used with multiple resources, whereas there are others, and I am in that camp, that suggest the audience should be as specific as possible.
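[Editor's note] The sender-constraint check Aaron describes here boils down to something simple on the resource server: the access token carries a confirmation claim (`cnf.jkt` in DPoP, an RFC 7638 key thumbprint), and the server checks that the key proving possession is the one the token was issued to. A minimal sketch, assuming the token claims and the proof's JWK have already been parsed and signature-verified:

```python
import base64
import hashlib
import json

def jwk_thumbprint(jwk: dict) -> str:
    """RFC 7638 thumbprint: SHA-256 over the required members of the JWK,
    serialized with sorted keys and no whitespace, base64url-encoded."""
    required = {"EC": ("crv", "kty", "x", "y"), "RSA": ("e", "kty", "n")}
    members = {k: jwk[k] for k in required[jwk["kty"]]}
    canonical = json.dumps(members, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode()).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

def key_matches_token(proof_jwk: dict, token_claims: dict) -> bool:
    """True only if the key that signed the proof is the key the access
    token was bound to (its `cnf.jkt` confirmation claim)."""
    return token_claims.get("cnf", {}).get("jkt") == jwk_thumbprint(proof_jwk)
```

With this check in place, a less-trusted resource server that receives the token cannot turn around and use it against another API, because it cannot produce proofs with the client's private key.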
And then whenever the client needs to call a different resource server, it should just be able to use a refresh token to ask for a different token on the fly, so that you reduce the blast radius. Now, the thing that you are describing is a potential justification to be more in the other camp, because at that point, even if you don't have an audience that protects you, with its specificity, against the use of a leaked access token, you can just prevent the leak, or make the leak non-actionable for the attacker. So potentially you can reduce the traffic, have fewer tokens in your cache, and less need to hit the token endpoint with a refresh token. So this is another way in which having a viable sender-constraint mechanism is a game changer. Because already today there is mTLS, which is already an RFC, and things like FAPI, the financial APIs, demand it: they say, if you want to be compliant with our stuff, you must do it. But it's hard; it has a lot of considerations about the edge, SSL terminators, all of that stuff. Once we have DPoP, we'll be able to use it in ways that, in the past, we've been solving differently, like with the audiences. So I'm really looking forward to this thing being officially done and published. Yep, yeah, great questions. Let's move on to the next one. Let's see, that was redirection attacks. This was something that Rifaat brought up; he is the chair, so he got to step away from his chair duties to present this. This is not a draft; it was a topic of discussion. Looking through the minutes, he did present some slides, but the core of it, if I remember correctly, was the idea that there are several places in the OAuth specs that say to redirect if an error of some sort has occurred. And there is a very clear mention that if the redirect URL of a request is wrong, don't send the user to that redirect URL, because that's a very clear problem. But there are some other cases that are less obvious about why they might be problems.
And this was an attempt at describing and capturing those other situations where you may not want to send the user away from the authorization server in the case of an error. Yeah, the way I understood it was that this is basically a trick that can be used to bypass spam filters and similar, or lists of known bad redirecters. Because as a malicious developer, you can sign up for an OAuth development service and register your app with whatever redirect URL you want, legitimately, because you own that particular application. And now you can send someone a link to that app that errors out on purpose, so there is no gateway, no verification step: if someone clicks that URL, they will be redirected to the attacker's redirect. And given that this redirect comes from a domain which is the identity provider's domain, which is very likely not blocklisted by the browser or by any mechanism you want, you bypass the filter, because if a filter started blocking that stuff, it would also block legitimate logins. So this is basically a flavor of an open redirector, which is not really a security thing; no one is stealing any tokens, no one is doing any elevation of privilege, but it is a very handy trick for skipping spam filters. Yeah, an interesting problem, and I believe this has been seen in real life; people are using this actively, which is why it's being brought up. Not gonna name any names, but it's not really anybody's fault either; it's not like anybody's doing anything wrong here. And that's the question here: these OAuth servers are doing what's documented in the spec today, and is that something that maybe the spec shouldn't be recommending, because of the possibility of it being used this way? So, like you said, it only applies to authorization servers that have a more public way for developers to create apps.
So it does not apply to your own organization's private OAuth server where your admins make apps, because the only people who can make apps are your admins, and if your admins are getting tricked into registering redirect URLs for attackers, that's a different problem. But it does apply to things like Twitter or GitHub, which are just public identity providers where anybody can show up and make an app, and there's very little approval process, if any. And then, yeah, you can use those URLs in emails or SMSs where they don't get caught at that stage. So, interesting question. And if I remember, the resolution on this was to document the situation, and Daniel was okay with adding it into the Security BCP if it was relatively straightforward text being added. Rather than recommending totally different behavior, it was more just pointing out the fact that this is a concern that some people should maybe be aware of. That's where I think we left it. The thing is, there isn't much that can be done without breaking things already out there. The reason this is structured the way it is, is that typically the authorization server doesn't have enough context to render an error experience that would make sense for the user. So if we were to say, you know what, we are not gonna do this anymore, then the burden would be on the authorization server to say, hey, this happened, and that would probably be impenetrable from the developer perspective. An alternative is returning an error so that the client could see, not a 200 and not a 302, but a 400-class error; at that point it would be on the client to do something, and you'd still have a pretty significant breaking change, because people would need to start managing that particular circumstance explicitly. Whereas today, whatever, you just come back, and maybe you can resume the existing flow if the return URL is a neutral one.
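[Editor's note] The mitigation being discussed, "document it and let the authorization server decide", could look roughly like the sketch below: keep the spec's hard rule (never redirect to an unvalidated URI), but let a server with open self-service registration opt into an interstitial page before redirecting on error. All names here (`REGISTERED`, the return values) are hypothetical, not anything from the spec or minutes.

```python
# A hypothetical in-memory client registry, keyed by client_id.
REGISTERED = {
    "client123": {"redirect_uris": ["https://app.example.com/callback"]},
}

# Errors the spec says to send back to the client via redirect -- which is
# exactly the behavior that can be abused as an open redirector by a
# client that registered a malicious redirect_uri on purpose.
REDIRECTABLE_ERRORS = {"access_denied", "invalid_scope"}

def handle_authorization_error(client_id: str, redirect_uri: str, error: str):
    client = REGISTERED.get(client_id)
    # Rule already in the spec: never redirect if the client or its
    # redirect_uri don't check out. Exact string match, no patterns.
    if client is None or redirect_uri not in client["redirect_uris"]:
        return ("render_error_page", error)
    # The mitigation discussed: even for a registered URI, a server with
    # open self-service app registration may choose to show an
    # interstitial page instead of bouncing the user straight out.
    if error in REDIRECTABLE_ERRORS:
        return ("render_interstitial_then_redirect", redirect_uri)
    return ("render_error_page", error)
```

The interstitial breaks the "identity provider domain launders the attacker's link" trick without returning a 400-class error that would break existing clients.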
So, I think that the solution the working group came to, which is documenting it, is a valid one, because the authorization server can always decide to do that anyway. At that point, if the circumstances apply, if the authorization server is a public one like the ones you described and decides that they don't want to do this, they can do whatever mitigation they think is most appropriate. You mentioned curation: they might start saying, okay, before you can publish the app, some human needs to look at it. I believe that's what happens with Facebook today: for certain claims you need to go through an approval process. Of course it's expensive, so we cannot mandate it from the spec, but I think it's good that we remind people: hey, if you are worried about this particular circumstance, this is something that might happen. Yep, yep. So I think that was a good discussion that went pretty well, and we'll see what comes out of it on the mailing list. I haven't seen any discussion of this yet. I haven't seen much apart from a couple of replies on the DPoP last call; in general it seems like people got an indigestion of interaction, and so now the mailing list is a bit slow. Yeah, we need some time to recover from that much social interaction after two years. Yeah, especially us IT people; we tend to be introverts, or introverted extroverts, or some combination. Yeah. So then the last one for the day was OAuth 2.1, and we had more than 30 minutes left by that point, so we got a nice long chunk of discussion in on that. That was fun. Yeah, it was some good stuff. Let's see, these are the minutes. The way I usually do these is I try to give a recap of what changed since the last time we met, and then try to pull out a few discussion points worth talking about in the high-bandwidth situation where everybody is present, things that are easier to do there than on the mailing list.
The mailing list is open all the time for discussions, but some of these things are just easier to knock out in person. One of them was bringing in the iss response parameter, since that was previously not an RFC and didn't seem appropriate to roll in, but we agreed that now that it is an RFC, it is appropriate to, at the very least, point out where it's relevant and reference out to the RFC. So that is on my list. The nice thing is that that one, again, flew through the RFC process, because it is just such a simple mechanism and solves a specific problem that doesn't necessarily apply to everybody, so it was very easy to say: this is a little box, here it is, you're done. Yeah. A couple of other good issues that we talked about. This one was interesting: I found text in the original OAuth spec that says access tokens have to have a limited lifetime; bearer tokens must have a limited lifetime. It's in the core RFC, and I know for a fact that there are implementations of OAuth 2 that do not do this, that issue unlimited-lifetime access tokens. So my question was: how do we reconcile this? Because the real world does not reflect the document. So what do we do? Now, that was a very good one, because I think people conflated the need for something that verifies whether an access token should still be accepted, without granting that acceptance as a given forever, with one tactic of enforcing that, which is to limit the expiration time of the access token.
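[Editor's note] As an aside on the iss response parameter mentioned above: what RFC 9207 asks of clients is a single comparison on the authorization response, which defends against mix-up attacks when a client talks to multiple authorization servers. A minimal client-side sketch (the issuer value and URLs are illustrative):

```python
from urllib.parse import parse_qs, urlparse

# The issuer the client trusts, taken from the AS metadata it configured.
EXPECTED_ISSUER = "https://as.example.com"

def validate_callback(callback_url: str) -> dict:
    """Parse the authorization response and enforce the RFC 9207 iss check,
    rejecting responses that claim to come from any other authorization
    server (mix-up attack defense)."""
    query = urlparse(callback_url).query
    params = {k: v[0] for k, v in parse_qs(query).items()}
    if params.get("iss") != EXPECTED_ISSUER:
        raise ValueError("authorization response is not from the expected issuer")
    return params
```

This is the "little box" quality discussed above: one parameter in, one equality check out, with no impact on clients that only ever talk to a single AS.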
And the one thing that I said back in the day, and I still stand by it, is that today we have countless examples of continuous authentication, in which we continuously assess the risk associated with a particular transaction and decide whether we are still okay with whatever the user or the caller did in order to prove they have the right to access this resource, or whether instead we decide, for any reason, like location, time elapsed, changed circumstances, time of day, whatever we don't know yet, that we are no longer okay with accepting this credential as grounds for giving you service: I need you to do X, which can be something else. And you can actually do this with a token that has no formal end to its lifetime. Let's say that if, for some reason, people always do something that revalidates the session before using the token, then it's like a paper plane kept flying forever by people with a leaf blower. The moment someone stops using the blower, it will drop down. But it's not like you need to intrinsically place an expiration; I don't need to put a timer that makes the paper plane explode at some point because it's been in the air for too long. So I think this resonated with the room: people came at it from different angles, but the substance was that we have other ways of deciding whether a token should be accepted or not, and expiration is not the only way. Enforcing it, constraining people to have tokens that must expire, would only result in many non-compliant implementations. So that was the substance of the discussion I think. Yep, yep, I agree.
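[Editor's note] The "paper plane" point above can be sketched as a resource-side acceptance check where a scheduled `exp` claim is just one signal among several. Everything here is illustrative (the claim and context names are assumptions, not from any spec): a token with no `exp` can still be dropped by revocation, inactivity, or risk, which is the substance of the discussion.

```python
import time

def token_still_acceptable(claims: dict, context: dict) -> bool:
    """Decide whether to keep honoring a token right now.

    Expiration is one tactic; a token with no `exp` can still become
    unacceptable through other signals. All context keys are illustrative.
    """
    now = time.time()
    if "exp" in claims and now >= claims["exp"]:
        return False  # classic scheduled expiration, when present
    if claims.get("jti") in context.get("revoked", set()):
        return False  # explicit revocation wins regardless of lifetime
    idle = now - context.get("last_seen", now)
    if idle > context.get("max_idle", 3600):
        return False  # the paper plane drops once nobody keeps it flying
    if context.get("risk_score", 0.0) > 0.9:
        return False  # continuous risk assessment can cut a token short
    return True
```

Note that the first branch is optional: remove `exp` from the claims and the token is still governed by the other three checks, which is exactly the "limited lifetime without a timer" reading of the spec text.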
So I think it turned out to be more about clarifying the intent of that statement in the spec, because if you read it quickly it sounds like tokens have to have a scheduled expiration date in the future, which, as we discussed, is not the only way to have tokens become invalid later. So that was a good discussion, yeah. It was, it was. And I think it's going to be a good clarification. This is one of the places where having OAuth 2.1 is objectively a good thing, because it gives us the opportunity to clarify something for which there is very strong consensus. Yep, yep. So the biggest part of the discussion here was around the credentialed client. Yeah. This was, so, oh, right, okay, I remember how this happened in the meeting. I had my list of issues that I pulled out to talk about with the group, which I thought were relatively bite-sized chunks that we could have a good focused discussion on and come to a resolution of some sort. And it worked, it went great. We got through my list and we still had like 15 minutes of meeting time left, because we had kind of breezed through all that stuff in the beginning too. And Brian was there and pointed out that there was a discussion on the mailing list about credentialed clients, and then the discussion kind of just stopped and nothing actually happened with it. And honestly, I had completely forgotten about that entirely. So I was like, yeah, please pop that back up on the list. And then we were like, well, why not just talk about it right now, since we have time? So we did, and that ended up being another good discussion. So, linking out to the mailing list thread about that from earlier. And yeah, backing up really quick: the idea with credentialed clients was introducing a new term for a concept that already existed in the core spec, which is something between confidential clients and public clients.
Confidential clients meaning you can have credentials; public clients meaning you can't. However, there are other properties of confidential clients that are mentioned more often than just whether they have a secret: things like it's pre-registered, it's known, all these ideas tied up in it. And then in several places there's this thing in the middle, where an app has credentials, but you don't necessarily trust it with the same level of confidence as a confidential client. So the idea was to capture that with a name, but I think it ended up causing more confusion than it solved. So, scrolling through the minutes on this is a treat, because I don't remember who took notes, but they took very good notes, and there's a lot of discussion that happened on this. I like this one: "David provided feedback from remote, which was inaudible." It's very literal. Yeah. But anyway, here's my summary of this thing. I think that some part of the group wanted a very simple, clear-cut definition which says: if you have credentials, you are a confidential client, there is no doubt. And the thing is that from the OAuth perspective, I think that works. If you think of OAuth as algebra, that thing is a good operator, a good predicate in that algebra. But for me, OAuth is a tool that people use to somewhat model reality using the primitives in this space, and reality has many other considerations that go beyond that particular detail. In particular, the main issue I have is that when OAuth was first conceived, native and public clients were kind of an afterthought, let's face it. The main scenario was: I have a website which wants to talk to another website, and it's gonna do it on behalf of my user.
So, something with agency, that has its own independent identity, and all the little tricks were devised to support this flow so that this thing can do stuff on behalf of the user even when the user is not present. Then, like many other things in identity, we took that thing and somewhat tortured it to also model scenarios with native clients, for example: situations in which distributing a key is complicated, protecting a key is complicated. But here is the part that people usually miss: the main difference between what we commonly think of as public clients and confidential clients is that there are many instances of a public client. Public clients are things that run on my machine, on your machine, on your phone, on his phone, on my watch, all of that. And so when you use the client ID there, you are just pointing to a set of configurations; you aren't really doing anything serious from the security perspective. Apart from tricks on iOS with claimed domains and stuff like that, and on Android, anyone can use that client ID. So now, if we say that anything that has a credential is a confidential client, what happens is that if you have those flows in which you have multiple instances of apps running on different devices, and you somehow add the ability to provision a key afterwards, now all the machinery that was built for confidential clients might not work. A lot of the assumptions there are: that's a website, it's a singleton, I have a list of users, it's a shared resource. Whereas the native client is: I'm the only one running it, apart from edge cases like multiple users on a shared computer. In general, there are many other considerations baked into the tools that people use for implementing OAuth in practice, because the goal is never to implement OAuth. The goal is to implement the scenario; it just happens to use OAuth.
And so if now we bring back these considerations, I think that we owe it to the world to explore the potential implications, as in: am I now going to have a lot of features about concurrent users which are useless when I'm building a native client? So maybe we just need to find a stronger way of decoupling application types from client types, because they have different properties. But good luck with that. Because if you ever landed on the Auth0 page for developers, we have a list of technologies, like Angular, React, React Native and similar; people click on their technology and we put them on the path to actually making the right choices from the OAuth perspective. If you now decouple client types from the application type, it becomes very tough. People would need to be experts in identity; we can no longer do some of that work on their behalf. So I think it's a good discussion to have, but I'm worried about the long-ranging implications. And I'm worried that a lot of people involved in the discussion live at the protocol level, as opposed to (and the stuff that you're showing is a perfect example) working with people that know nothing about this and helping them be productive and effective when using those technologies. So I'm scared of this discussion, because I know it's going to be difficult, and I'm worried that if developers are no longer shielded from some of this stuff, we might end up making identity harder rather than easier. Whereas my hope is that we get to easier and easier layers, rather than the opposite. Yep, and we have the same thing on the Okta developer site as well: what kind of app are you building? It's common.
And I think for me, one of the things that I've been trying to help guide the Okta product on is decoupling the idea of whether the client is authenticating from the concept of PKCE, and from the concept of application type, because traditionally, in the older way of thinking about OAuth, those all kind of came together as a package. Like, you're building a single-page app: that means it's a public client, there is no way it can authenticate, and it means you're doing the implicit flow. And if you're building a mobile app, it means you can't authenticate and you're doing PKCE. There were these sort of bundles, and a lot of what we've been doing in the OAuth security space in the spec is to simplify that whole thing, and what it's doing is removing those bundles. Now it's just: if you're building an app, you should use the authorization code flow with PKCE. If you can protect a secret, you should be using a secret. Those aren't related concepts; they're different concepts. I've seen so much documentation in so many places that says "PKCE or client secret," and those never were a dichotomy. They never were opposites. They just happened to often be bundled into different types of apps historically. So I think that's all kind of getting wrapped up in this idea too. It's very hard. It's very hard also because we make security considerations, and of course a customer does too, but depending on the nature of the business, depending on the type of users and similar, there are a number of different considerations that also come into play. So sometimes when you're working with consumers, the funnel is the god of that particular company: if users don't go through the funnel, the company dies.
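Since PKCE and client authentication keep getting conflated, a small sketch may help show why they are orthogonal: generating a PKCE verifier/challenge pair (per RFC 7636, S256 method) involves no secret at all, and nothing prevents a confidential client from sending a client secret alongside it.

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636).
    Works identically for public and confidential clients: there is no
    client secret involved anywhere."""
    verifier = secrets.token_urlsafe(32)  # fresh random value per request
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    # base64url without padding, as the spec requires
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

# Check against the RFC 7636 Appendix B test vector:
digest = hashlib.sha256(b"dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk").digest()
print(base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii"))
# E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM
```

The challenge goes in the authorization request and the verifier in the token request; whether that token request also carries client authentication is a completely separate decision.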
And so that's where they have a different tolerance for, like, whether the user session is fresh or not fresh, whether they use WebAuthn or whether they just use username and password, versus a company which instead has, like, employees, or high-value operations like financial transactions and similar, where the tolerance is different. And if we operate at the protocol level and project that onto customers, then we somehow flatten everything onto the most secure side of the house, where unfortunately the effect sometimes is that people completely opt out. As in, they say: okay, this is impenetrable, it's just way too complicated for me, I'm just going to do something completely custom. And we lose when that happens; when our stuff is so secure that no one can use it, we lose. So it is a difficult compromise, and I think that we have yet to crack the right angle, but the application type is one of the best defaults we can latch onto. And although it's good to give optionality, the classic questions (do I have a SPA, do I have a backend, do I not have a backend) can lead you in very different directions, for what we said earlier, but sometimes that question is enough to turn off a lot of people, who will just leave saying, I don't know what you're talking about. So I think that as we work to make this stuff usable and consumable by users, we'll need to maintain some flexibility, which is not always the way in which we operate when we do OAuth standards. Yeah, definitely a challenging balancing act. Oh, Justin is here. Justin was at the meetings. Justin says what we need is a clear idea of what a client ID actually means, because it means like three or four very different things.
Yeah, I think that's part of why this is becoming a difficult conversation: because it means certain things to different people, certain people have grabbed onto particular meanings of it and hard-coded those in their systems, and what it means to everybody isn't always the same. So that's why we're getting these differences of opinion of, oh, a confidential client is just something that has a secret, versus, oh, we made a bunch of assumptions about confidential clients that have credentials that maybe you didn't, so you might be able to say that, but we can't. So yeah, it does mean a bunch of different things. It is tricky; we'll definitely need more clarity. If anything so that at least we can classify things, as in: given that those meanings were reverse-engineered from practical use, and people just took different angles and similar, we need to bring back some guidance, as in, okay, if you made these assumptions, just know that these are the things that might go wrong. One of the classic, most frustrating things I see is that occasionally someone discovers that they can reuse the client ID of a native client, and then they make a big deal, saying: ha ha, I hacked this! And you look at it and it's like, dude, this is by design. But it's surprising how many people think that the client ID is a secret, or kind of a secret. So yeah, clarity there would be very useful. Yeah, last thing on 2.1 before we move on, because we're almost out of time. We need a second episode. Yeah, maybe we do, because we covered only half of it. Yeah, maybe that's a good plan. Do you have any plans to add the device flow to the 2.1 draft? Right now there are no plans to bring it in in the same way that we brought in, actually incorporated, a lot of the other documents, but there is a reference to it.
So in the draft it's like: here are the grant types defined by this draft, and here's how you can make extensions to define other grant types, for example the device grant. So I think that's probably where it's going to land. I don't think anybody is going to be very interested in actually bundling it in. Yeah, I'd go even further and say that I think the future of the device grant is not as certain as it was until recently. This came up in the second part of the meeting, so at this point I can maybe mention it and then pick it up in the next episode anyway. There are a number of constraints that come with doing these cross-device flows that make them somehow intrinsically vulnerable to phishing and similar attacks. And during the second meeting that we had that week with the OAuth working group, Pieter from Microsoft and Daniel from yes.com and Filip from Auth0 all presented various circumstances in which the device flow has been abused, and some general attacks to which this thing is intrinsically vulnerable. And it's kind of common to all the cross-device flows as they exist today: there is also OpenID SIOP v2, which has some of those characteristics, and CIBA might have something to that effect as well. So there are various thoughts about how some of those things can be mitigated. There are new things coming up, like the multi-device credentials in FIDO, where you can use cloud-assisted Bluetooth, so there is really no user interaction, and various ways of enforcing co-location of the devices in the same volume of space. But the future is unclear. And so, although I don't think the device grant is going anywhere, because it's so incredibly useful, it does have limitations, and there might be better technology just around the corner that makes it obsolete. So I think the wise course of action is not to enshrine it in 2.1, because of all of those considerations.
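For readers who haven't used the device grant (RFC 8628), the client side is roughly this sketch: the device shows the user a code, the user approves on a second device, and the client polls the token endpoint until it gets a result. The `request_token` callable below is an illustrative stand-in for the HTTP POST to a real token endpoint; the error codes, however, are the ones the spec defines. Note that nothing in the protocol proves the approving device is anywhere near the polling device, which is the root of the cross-device phishing concern discussed above.

```python
import time

def poll_for_token(request_token, interval: float = 5.0, max_attempts: int = 12) -> dict:
    """Poll the token endpoint as in RFC 8628 section 3.4/3.5.
    `request_token` stands in for the HTTP POST with the device_code;
    it returns a dict shaped like the endpoint's JSON response."""
    for _ in range(max_attempts):
        resp = request_token()
        if "access_token" in resp:
            return resp                   # user approved on the other device
        error = resp.get("error")
        if error == "authorization_pending":
            time.sleep(interval)          # user hasn't finished yet; keep waiting
        elif error == "slow_down":
            interval += 5                 # server asked us to back off (RFC 8628)
            time.sleep(interval)
        else:
            # expired_token, access_denied, etc. are terminal
            raise RuntimeError(f"device flow failed: {error}")
    raise TimeoutError("user never completed authorization")
```

A stub makes the happy path visible: `poll_for_token(lambda: responses.pop(0), interval=0)` with a list ending in `{"access_token": ...}` returns that final dict. The flow's usefulness and its weakness are the same thing: the polling client never interacts with the user directly.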
I think that's a very good point, especially because, again, OAuth 2.1 is supposed to simplify, but also capture the best practices for OAuth. And if there are already concerns that this thing may not be the best, then we shouldn't package it into the thing that we're saying is the best. So, very good point. And that I think is a fantastic lead-in to the next episode, because that topic was one of the major blocks of time we spent on the second day. Really fascinating problem. And a lot of the side discussions were about that topic as well. So it definitely deserves more time than we can get through right now. So Brian is telling us to step up and do another episode. So we will plan on doing that. It's a pun, because one of the things that was presented was the step-up authentication draft that Brian and yours truly coauthored. Yep, there it is. So yeah, this was the agenda for the second day, and as you can see, lots of interesting topics there as well, not to mention the side discussions we had. So yeah, we will do this again in the near future and talk about some other things that happened at the meeting. Yeah, and we should do it actually pretty soon, because in a few weeks we will have conferences galore: the last week of April is IIW in Mountain View; the first week of May is the OAuth Security Workshop in Trondheim in Norway. I never know if I'm pronouncing it correctly, but then that's true for pretty much anything that comes out of my mouth. And then the week after, there will be the European Identity Conference in Berlin. Oops, I forgot to put that one on here. Then, blessedly, three weeks of pause to try to get some work done. And then there will be RSA in San Francisco another week after that, and then Identiverse. So I think that our carbon footprint is going to be interesting. Agreed.
My schedule: I'm not going to all of those, but I am going to, I should put EIC on this list, I am going to the Security Workshop, and the week after that is EIC, so going to that, and then I will also be at Identiverse in June. Yeah, it's going to be a busy time. And then just right around the corner after that is the next IETF meeting in Philadelphia. So we have to make a lot of progress in three months to have more things to talk about there. Oh yeah, all of us doc authors. Or the opposite, as in: so that we can just pick up saying, well, we didn't have time, let's debate the same stuff again. No, of course, we need to find ways to make progress for sure. All right. Sounds like a very good stopping point. Unless there are some questions; do we have questions in the pipeline? Nope. Nope, I think we covered everything. So yeah, thanks. Yeah, we didn't talk about Elon buying Twitter at all, so, you know, whatever it is. Oh yeah, all our insider information that we didn't share. All right. I forgot to have the little graphics on the screen. This has been the OAuth Happy Hour. Thanks for joining. We will post the link to the next one on events.oauth.net and the oktadev.events website; I'll drop a link to that in the chat. And that is where you'll be able to find us in the future. So thanks, everybody, and we will see you next time. Ciao a tutti. Ciao, ciao.