of the OAuth happy hour. I am Aaron Parecki, and I am joined today by a very special guest, Vittorio from Auth0. Hi, Aaron. Thanks for having me. So nice to have you here, and very glad we can do this together. So thank you all for joining. As always, feel free to say hi in the chat and post questions there. If you have any about OAuth, we will answer some questions. And we will also just talk about what's been going on in the world of OAuth and some of the latest things coming out of the OAuth group, or who knows, it's always a fun time. So yeah, if you are watching this live, go ahead and just say hi in the chat so we know you're there, and let us know where you're joining from. So with that, let's just jump right into talking about what's been going on in the OAuth world in the last couple of meetings of the working group. We usually like to recap some of the latest meetings, and I think we just had one this week. Is that right? It's hard to keep track. Yeah, there was one on Monday, which I skipped. I skipped it because of the thing that is happening between you guys and us guys; there was an overlap. So I followed all the engineering meetings, but that one I had to skip. It was the one from Justin and Annabelle, right? That's right. Yeah, the HTTP signatures meeting. Ironically, I also had to skip that one, because I had double-booked myself before that one got scheduled. So I guess we won't have much to catch up on about that one, but we can certainly talk about the previous meeting, which I know we were both at. And that one was very interesting. But there was a lot of chatter afterwards, and I'd say that we can just say one sentence about it. It was about the work that Justin and Annabelle have been doing for HTTP signatures, which is very foundational. And so there was this other draft that Justin was working on, similar to DPoP, having proof of possession for OAuth.
And I believe that the main thrust of the meeting was that the higher-level draft should be rebuilt on top of this new HTTP signatures work. I think that was the gist of the entire discussion. Interesting. So OK, yeah, I didn't even realize that. I need to go back and look at the notes on that meeting. But to back up a second: the idea with HTTP signatures, I'm actually really excited about that idea in general, because I think it'll help a lot of different things, not just the OAuth world. Oh, yeah. The idea of HTTP signatures is about being able to sign a request you're making, rather than relying on TLS, which verifies the underlying connection; this is more about the actual message you're sending and the response you're getting. So HTTP signatures falls into the category of proof of possession, and we've talked a bit on this stream about different kinds of proof of possession mechanisms. That is all to fix the problem that bearer tokens can be picked up and stolen and used by anybody who has one, because that's all you need. It's like finding a key on the ground. So there are many different ways to solve that, and I would say none of them are really mature right now. Well, technically, MTLS is an RFC. The thing is that it uses PKI, which, as we know, is black magic. So it might not be very viable for the typical use cases of OAuth. But at least in theory, that one is available today. It's available, and I know it is deployed in places, but it also can be a challenge to deploy, depending on how you've set things up in your architecture. So it's not just like, oh, we should all go use that now, because there are definitely times where it doesn't make sense. Absolutely. And also, in the case of single-page apps, you might end up with users being prompted to choose between three GUIDs: what is this stuff? Client certificates and usability just don't mix. So yeah, you're absolutely right. It's not very viable.
Yeah, so hopefully HTTP signatures can fill in that missing piece. I'm excited about it, because I think it has an opportunity to be baked into HTTP clients themselves, where you don't even have to really write any code as an app developer. You just hook in your key, your certificate, and the library, or even curl, will just do the work. That's kind of where I see this going; the best outcome for that spec would be to have it baked in at the curl level. That would be great, yeah. So hopefully that comes to fruition soon. I am optimistic about it, and I know Justin and Annabelle have been doing a lot of work to refine that draft, make it make more sense, make it more flexible. Yeah. So okay, the week before, at the OAuth meeting, that was about a draft that you are working on, right? Yeah, exactly. That was about a draft whose name is going to change, and it's sad because it was such a good name, but unfortunately there are reasons. The name of the draft is TMI BFF, which is basically about mediating tokens and session state, delegating some of the work of the OAuth flows to the backend. The name is confusing because with the BFF part, which stands for backend for frontend, a lot of people in the industry mean a topology in which the backend does absolutely everything. You might have a frontend written in JavaScript, and pretty much everything that needs to be done that has some complexity, so obtaining tokens, caching tokens, using tokens for calling APIs, is done by the backend. Whereas in this proposal, we ask the backend to take on some of the work, but not all of the work. And this was something we had to explain many times, and people kept getting it wrong, and people even made fun of me with memes when I tried to fix it. So basically the conclusion now is that we just change the name, because BFF refers to a pattern that is different from the one this is describing.
Well, the key thing is it's not a mathematical thing or a taxonomy, where you say, yeah, this thing is a mammal and this is a reptile. It's more of a common usage. And I've seen BFF used both for cases in which there is a backend and it does some of the work, and for cases in which absolutely everything is done by the backend. And here is the main thrust behind the spec: if you're doing a single-page application, you have your presentation logic in JavaScript, and you do have a backend. It might be easier for you to delegate to the backend all of the traffic between your app and the authorization server. As in, you might have a backend that takes care of establishing your signed-in session before any OAuth occurs, so just getting your session cookie. And you could have your backend use standard libraries, standard middleware, to get access tokens and refresh tokens and save them on the server side, just like you would do if you were writing a web app that runs on the server. And then the idea is that if we find a standard way to expose the tokens that you obtained on the backend and make them available to your frontend, the logic on your frontend can be super simple. You don't need to configure your JavaScript; you don't need to place client IDs in it or put in redirect URIs. You just have these couple of endpoints, like one endpoint for saying, hey, I'd like an access token with these scopes, and your backend can send you back a cached token, or can use a refresh token automatically to get a new one. And at that point, you can make your call from your JavaScript. In terms of code, it is a dramatic simplification with respect to what we do today. Even when you are using an SDK and you do code plus PKCE in JavaScript, it's still a lot of moving parts. Instead, in this way, it's very simple.
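The frontend side of the pattern just described can be sketched in a few lines: the JavaScript simply asks its own backend for a token, with no client ID, redirect URI, or OAuth flow configured, and caches what it gets back. The endpoint path and payload shape here are hypothetical, not taken from the draft.

```javascript
// Sketch of the frontend half of the token-mediating pattern.
// "/auth/token" is a hypothetical backend route; the session cookie
// is what authenticates the frontend to its own backend.
function buildTokenRequest(scopes) {
  return {
    url: "/auth/token",
    options: {
      method: "POST",
      credentials: "include", // send the session cookie
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ scope: scopes.join(" ") }),
    },
  };
}

// Tiny in-memory cache so repeated requests for the same scopes reuse
// the token until shortly before it expires (30s safety margin).
const cache = new Map();

function cachedToken(scopes, now = Date.now()) {
  const key = [...scopes].sort().join(" ");
  const hit = cache.get(key);
  return hit && hit.expiresAt - 30000 > now ? hit.token : null;
}

function storeToken(scopes, token, expiresIn, now = Date.now()) {
  const key = [...scopes].sort().join(" ");
  cache.set(key, { token, expiresAt: now + expiresIn * 1000 });
}
```

Compared to running a full code-plus-PKCE flow in the browser, this really is the entire client-side surface: build a request, cache the result.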
There is a catch, which is: if you have a backend and if you can, it's ideal from a security perspective not to give any tokens at all to your frontend. You could technically do all the calls from your backend, so basically proxy all your API calls through the backend. At that point, you would avoid having tokens in the frontend; you'd only have cookies. And actually, that's what I would recommend if you can do it. If you can do it, do that. And that's typically what people think of as the BFF pattern, right? Correct. Full proxy through the backend, yeah. You are absolutely right, that's what it turns out most people think of as BFF. But the trick there is that it's not always possible. Whenever you put a facade in front of all your API calls, you suddenly have a choke point. So if you use Lambda, Lambda by default allows up to 1,000 concurrent executions. So if you have many, many users concurrently calling many APIs, even if the APIs are not very chatty, and of course chatty ones are out there, it's tough. You end up having to deal with potential latency, with potential throttling in which some of your calls don't work. And then there are lots and lots of other considerations. Your APIs might already have some intelligence for dealing with geolocation, with the position from where you're calling; they might have a CDN in front. So if you are going through a facade, everything that the APIs already implement, you have to re-implement. And for a lot of people, that's really more effort than they're willing to do. Whereas if I'm doing a single-page app and the backend is just doing the bare minimum, like you come to me once every hour asking me for a token, and for everything else you just make your calls directly, then from the performance perspective there is really no comparison. Also in terms of expense.
And then finally, there are many other reasons, but one that is particularly important, I believe, is resilience in terms of availability. If I get a token and I have it in memory in my local client, I can just call for one hour. Even if the backend is for some reason down, I'm decoupled from that. Whereas as soon as I have to route through it every time, again, I have a problem, because now I have a critical component that always needs to be up, and that might not always be possible. Or you might be a JavaScript developer without a lot of knowledge or interest or infrastructure or budget for running a highly available component on the server side. So long story short, this TMI BFF is a solution for dealing with that particular scenario. Right, so it's a standard and a pattern for handing off an access token from a backend to the frontend JavaScript, so that the JavaScript can make API calls directly, avoiding proxies. Correct, correct. And presumably people are doing this already today, right? Which is why you are capturing it as a standard. It's not like a new idea, right? Yes, you are absolutely right. People are already doing it today, and they are doing it, very often, in a very naive way. There is no guidance; we are a bit responsible for this. When we tell people, okay, you have your backend, just use cookies and secure this thing, we don't tell people things like: make sure that you place headers in your responses so that cross-site request forgery won't be able to call your APIs on your behalf. Because that's the thing. When you're using cookies and not access tokens, things are easier, but there are different kinds of attacks that you normally would not be vulnerable to. If I'm using a header with an access token, I don't need to worry about cross-site request forgery. Whereas if I'm using cookies, people might be able to call my APIs without these particular protections. And the other part is that people do this completely independently.
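The cross-site request forgery point deserves a concrete sketch: when the token endpoint is protected by a cookie rather than an Authorization header, the backend should only honor requests that provably came from its own frontend. Checking the Origin header and requiring a custom header (which cross-site HTML forms cannot set) are two common measures; the exact header names and origin below are illustrative, not from the draft.

```javascript
// Sketch of a CSRF check for a cookie-protected token endpoint.
// The allowed origin is an assumption for the example.
const ALLOWED_ORIGIN = "https://app.example.com";

function isSameOriginRequest(headers) {
  // Cross-site form posts carry the attacker's Origin (or none);
  // fetch() from our own pages carries ours.
  if (headers["origin"] !== ALLOWED_ORIGIN) return false;
  // A custom header forces a CORS preflight, which cross-site
  // HTML forms cannot trigger.
  return headers["x-requested-with"] === "XMLHttpRequest";
}
```

In a real deployment this would sit alongside SameSite cookie attributes rather than replace them; the sketch only shows the header-based layer being discussed.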
So there is no interoperability. If I expose the ability to send tokens from my backend to my frontend, I'll do it on my own, with my own endpoints, using whatever syntax I want. And so it can't interoperate. Whereas if we have a standard, if we have just a little bit of guidance on how to expose these endpoints, how to ask for tokens using the right scoping so that we don't send tokens that are too powerful, then suddenly we can actually imagine a world in which you can mix and match React, Angular, Vue, anything you want, with Java, Node, ASP.NET, anything you have on the backend. And given that the touch point between the two is standardized, you can just mix and match SDKs and everything works out of the box. Yeah, that makes a lot of sense. So we have a question here about where the draft is. I'm going to drop the link to that. I think the only draft is on the IETF URL, right? That's the only one we have right now. We have an updated version on our GitHub, and we are going to add some updates reflecting the comments that came out of the last interim. So the draft gives an idea of where we are right now. But it's -00 or -01, I don't remember which. But okay, the GitHub is there. You can see the sausage being made if you check the GitHub. Oh, it looks like there's a version two on GitHub, so that's probably the one in progress. And, let's see. Oh yeah, version -01 was the one that was talked about at the meeting. Yeah, because I did the revision in which I got rid of all the security claims and we added the headers. We had a really good discussion, a lot of good contributions, and Neil Madden from, I believe, ForgeRock had good suggestions in terms of extra security measures. And so in -01 we added the right stuff. And that's why we bring these to the group and talk about them and collaborate, because yeah, it's hard to get this stuff right, even for the experts.
So let's collect our knowledge together and capture it in a spec. Yeah, I'm always amazed at that. Like for the access token stuff, I've been dealing with that on a nearly daily basis with hundreds of customers for a decade. And then when we tried to standardize it, so many people had different scenarios and really legitimate views and opinions and insights. And sometimes, I admit, I'm frustrated with the process, because it can be so slow. Or people chime in without having read it, and you have to repeat the same conversation. So it's not always great, but it's amazing. The safety net of having so many people, so many security experts, look at this stuff and really nitpick, it's great. It really raises my confidence when I use these kinds of standards, because I know that the scrutiny truly was top notch. Yep, yep, that is for sure true. So yeah, what's the next step on that draft, then? You said it's going to get renamed. So we didn't rename it yet. We are now brainstorming. And given that I've seen what happened with GNAP, where naming took a really long time, I prefer to huddle with the co-author of the spec, Brian Campbell from Ping Identity, propose a name, and then see whether it sticks or not. And then we'll iterate if necessary. The other big change that we're going to make: today the draft suggests two fixed endpoints, one for getting tokens and one, which we didn't touch until now, for getting the current state of the session, meaning the name of the user that just authenticated and any information that the application believes constitutes the profile of the user. Today the current draft just says, well, every app should have a /.well-known/bff-token or bff-sessioninfo. And the feedback that we got was that it's best to publish metadata instead, which says: we support TMI BFF, and if you want to talk to us, these are the two routes that my application exposes for tokens and for session info.
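A hypothetical sketch of what such a metadata document could look like, published at some agreed location under the app's root. The field names here are purely illustrative; the real names will be whatever the next revision of the draft settles on.

```json
{
  "tmi_bff_supported": true,
  "tmi_bff_token_endpoint": "/auth/token",
  "tmi_bff_sessioninfo_endpoint": "/auth/sessioninfo"
}
```

The point of the indirection is exactly what gets discussed next: each stack can choose route shapes that fit its routing conventions, and SDKs discover them from the metadata instead of hardcoding fixed paths.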
So it's a very good point, and I believe it's a big improvement, because it will give the right flexibility. It will make it easier for each developer stack to use the correct route shape, because we cannot impose in advance what we think is right; everyone will use whatever is compatible with their stack. So that's going to be one big improvement. And the other is... Yeah, so does that go in the OAuth server metadata document along with the rest? Or no, is it OAuth client metadata, or resource server metadata? Is it actually a resource server? Yes and no, because this is the client. This is one of the messy things of OAuth. Here we are talking about the client, which is like a cat with a very stretched neck, in which the head is in the frontend and the body is on the backend. But it's still the same app, and it is the client, which is counterintuitive, because the frontend is hitting the backend, and the backend is kind of a resource that you are protecting. But, you know, the resource is the API, and there are no APIs here, at least in the context of requesting things. So it's going to be client metadata, which is odd; I don't think we ever had anything of the sort. It's just going to be an endpoint which says: go to the root of your web app, dot this address, and in there you will find the routes that you need for doing these calls. So it's a brand new piece of metadata. Okay, yep, yep. And the other big change you proposed, you were the first one who brought it up during the call: today, in the current spec, when we expose the session info, we basically leave it completely to the reader, as in, you, the app, decide what a session is for you. So if a session is the name, or username, or preferred name, or first name, or last name, or display name, or email, whatever you decide, it's yours.
And we envisioned that people might just dump the content of an ID token, which they might have used for signing in before starting this, and just send it back, but we didn't want to impose anything there. And you, and later others, rightfully brought up that if we want to increase interoperability, if we came out with a schema of attributes that we believe are going to be in common use, then we should enshrine that list of attributes in the spec itself. And that would actually unlock a lot of interesting possibilities, because imagine React components, Angular components, components in a specific stack, that already expect a certain set of attributes, and then you just do data binding. So once again, out of the box you have some element that you can use in your app that you know works, and you don't need to do any runtime configuration. It would just work out of the box if we enshrined those things in there. So I think it's a really good suggestion, and we are going to incorporate it in some form in the next update. I guess the trick will be figuring out what is the correct subset of properties to include, so that we're not relying too much on, for example, OpenID Connect, which is technically not really part of this draft yet. So that's going to be the trick. I have a trick for that, which is: we can just go to introspection. If you look at the introspection spec, there are some claims that are recorded at IANA as part of it, and it is technically IETF. So from the legal perspective, we don't even need to add the reference. Of course we'll add the reference to OpenID, because I'm a big believer in clarity rather than formalism to the extreme. But even if we wanted to stay on formal land, we could just take all the stuff, or the subset of stuff, which comes from that.
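The idea of enshrining a common attribute set can be sketched as a filter on the backend: instead of dumping the whole ID token, the session-info endpoint returns only an agreed-upon subset of claims, so generic frontend components can data-bind against a known shape. The claim list below is illustrative, borrowed from common OpenID Connect claims; the spec would define the real one.

```javascript
// Sketch: project an ID-token-like claim set down to an agreed
// session-info schema. The claim list is an assumption for the example.
const SESSION_CLAIMS = ["sub", "name", "preferred_username", "email", "picture"];

function toSessionInfo(idTokenClaims) {
  const out = {};
  for (const claim of SESSION_CLAIMS) {
    if (claim in idTokenClaims) out[claim] = idTokenClaims[claim];
  }
  return out;
}
```

Note that protocol-level claims like iss or nonce get dropped, which previews the layering question discussed next: how much of the token machinery should the frontend see at all?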
But you are right. And also, the recent JWT access token profile has claims which come right from the introspection response as well. Here I think that we just need to think a bit about what we really want to expose, as in: does the frontend need to know the issuer? Is it something that you would show? Or the ACR or the AMR claim? I'm sure that a lot of people wouldn't want them, but I don't know if it's just because we'd be doing a layering violation. Like, a lot of people want to know the scopes of access tokens on the frontend, because they want to modify the UI: I'll show you only the parts of the UI that you can actually act on. But I think it's a bit of a layering violation, because you do need to change the UI accordingly, but you cannot expect a client to understand the semantics of the scope. You can when you own both ends, but in the general case it's hard. So here I think we'll get into these philosophical discussions about what the frontend should know. The other thing that is easy to forget, given that we are building a contract between the frontend and the backend, is that those two are owned by the same entity. It's still the same app. Frontend and backend are the same app from the same developer. So we define a contract to the extent that it will allow us to build SDKs that talk to each other in the right way without asking the developer to reinvent the wheel. But at runtime, the same developer will have the React SDK and the Node or Java SDK on the server side. So he, she, or they will own the entire thing. So in terms of control, there are lots of things that we don't need to formalize. Ultimately, hiding the issuer from the frontend might not achieve much, given that the developer has access to both. It's kind of up to them anyway. So I foresee a lot of interesting discussions on the mailing list. For sure. And there's a good question here in the chat.
So even with BFF, malicious JavaScript can still call your APIs. So again, it boils down to cross-site scripting prevention. It is a good point, yes. The main difference here is that with the full BFF, an attacker can call the API if you are not correctly protected against XSS, but with TMI BFF, they can do arguably worse: they can get the access token itself and use it for calling the API. So it's particularly important to protect against that. Yeah, the way I think of it is: with the full BFF proxy pattern, the cross-site scripting attack is limited to telling your JavaScript to make calls to your API, and only the things that are exposed by that API. And yes, that is bad, but that's sort of the extent of the damage. Whereas if the access token is actually in the JavaScript and can be extracted because of cross-site scripting, then the damage is a lot worse, because there's the potential for calling not just the one API, but maybe any method on the API, even ones that didn't have a proxy set up, or other resource servers as well, other APIs that might also accept that access token. Right, that's absolutely correct. The main mitigation for these is the scopes. Hopefully tokens are scoped down so that they don't allow calling many different things. The challenge, I'd say, is that once I have the access token, I can inject it in a server farm and start hitting you hard, as opposed to going through your own app, and I can keep doing it even if you pull down your app. So there are a lot of things that are bad if people extract the access token. However, the scope limitation, the audience limitation, all these things are places where, if you followed the correct practice, you should be reasonably covered. Actually, that just reminded me of another thing that I think this draft helps with. In the pure single-page app model, where there is no proxy, no backend, you would need to use refresh tokens in order to not have the user always log in again.
And a refresh token in a single-page app is particularly dangerous, because if it is stolen through cross-site scripting, the refresh token can be used by the attacker off in their own little world, nothing to do with the browser anymore. So with the TMI BFF pattern, there is no reason to have the refresh token in the browser at all anymore, right? You're absolutely right. Refresh tokens stay on the backend and never come back. And one of the advantages of this is that, given that we obtain the refresh token through classic confidential client flows, we don't need the authorization server to support any of the fancy new things like refresh token rotation, which instead is required by the BCP and by OAuth 2.1 and similar. If we want to use refresh tokens in the browser, we need to take some extra measures: either support rotation, so every time you use a refresh token, that refresh token expires and you get a new one that you'll use for the next call; or you need to use sender constraining, which today is not really viable for anyone, so it's more of a future requirement. Instead, with the TMI BFF model, you can just go to a very old-fashioned authorization server that doesn't have these advanced capabilities. And here, I don't want to point fingers at anyone, but today there are very successful, commercially successful SDKs out there that do get refresh tokens in your JavaScript, and their authorization servers don't support rotation. So the only thing that somehow protects you is the expiration of the refresh token, which is still not enough. According to the BCP, you must have either rotation or sender constraining. So we can sidestep that by just doing everything on the server side. So that's an added advantage.
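The scope and audience limitations mentioned a moment ago as the main mitigations for a stolen access token can be sketched as a check on the resource server side. Claim names follow common JWT access token conventions; signature and expiry validation are assumed to have happened already, and all values here are illustrative.

```javascript
// Sketch: enforce audience and scope at the resource server, so an
// exfiltrated token is only usable against this API, for these scopes.
function authorize(claims, expectedAudience, requiredScope) {
  // "aud" may be a single string or an array of audiences.
  const audiences = Array.isArray(claims.aud) ? claims.aud : [claims.aud];
  if (!audiences.includes(expectedAudience)) return false;
  // "scope" is conventionally a space-delimited string.
  const granted = (claims.scope || "").split(" ");
  return granted.includes(requiredScope);
}
```

A token scoped to read:messages for one API then fails both at other resource servers (audience) and for more powerful operations (scope), which is the "reasonably covered" outcome described above.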
The other thing that I really like about this: your authorization server might have the ability to do refresh token rotation, like Auth0 does. We added this when the implicit flow went away, with ITP and similar; we just said, you know what, let's bite the bullet, let's add refresh token rotation support. And from day one our SDK supported it, so as a developer you didn't need to do anything; we just switched from implicit to PKCE plus rotation, and the surface of the SDK remained the same, so nothing broke. But the point is, there are certain contingencies that can occur that we cannot protect you from. For example, you try to use the refresh token, the library does it on your behalf, and the network somehow fails to send you a response. Now you're stuck in no man's land, because you have the old refresh token, which, if your call succeeded, is no longer valid. But if your call succeeded, then you should have received a new refresh token, which you don't have. So what are you going to do now? We can do a bit of what the spec allows, as in: maybe I'll give you 20 seconds in which you can retry without penalty. But those are all things that weaken the security of the entire system. And it's also something that eventually I cannot hide, because if that call fails again, what am I going to do? So the SDK cannot protect you from that kind of complexity. But if you do this thing on the server side, then it becomes a non-issue, because you just don't need to worry about that. It's just handled on the server side. That's true, right? Yeah, the network connection is probably not going to fail between the two servers. It's probably going to fail between the browser and a server, and the refresh token use is all server to server at that point, which is where it should be. That's good.
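The retry-grace-period idea just described can be sketched on the authorization server side: after a refresh token is rotated, the server tolerates the previous token for a short window, so a client that never received the response can retry, while a later replay is rejected. This is an illustration of the tradeoff being discussed, not any particular server's behavior; the 20-second window and token format are assumptions.

```javascript
// Sketch of refresh token rotation with a retry grace window.
const GRACE_MS = 20000; // 20 seconds, per the example in the discussion

class RefreshTokenStore {
  constructor() {
    this.current = null;
    this.previous = null; // { token, rotatedAt }
    this.counter = 0;
  }
  issue(now) {
    this.previous = this.current && { token: this.current, rotatedAt: now };
    this.current = "rt_" + (++this.counter);
    return this.current;
  }
  redeem(token, now) {
    if (token === this.current) return this.issue(now); // normal rotation
    if (this.previous && token === this.previous.token &&
        now - this.previous.rotatedAt <= GRACE_MS) {
      return this.current; // lost-response retry: hand back the new token
    }
    return null; // stale token outside the window: treat as replay
  }
}
```

The weakening Vittorio mentions is visible in the code: within the grace window, a thief holding the old token gets the new one too, which is exactly why doing refresh server side, where no rotation is required, is simpler.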
And even if it does fail, you still don't need rotation, because on the server side you redeem the refresh token with your credentials. The BCP does not require you to use rotation when you are doing a confidential client flow. So it just becomes a non-problem. And then in general, for me, I come from a background of stuff on the server side, like SAML, WS-Fed, all of that, and JavaScript has always been a bit of a scary thing for me. Code flow with PKCE works beautifully; it's a thing of beauty, but it also has so many moving parts. And so building that, packaging it in SDKs, making sure that we do everything right so that we don't transfer complexity to the developer, was a lot of work. And like every complex machinery, I'm always a bit worried that when it fails, it fails in ways which are difficult to troubleshoot. Whereas on the server side, you can use really blunt weapons, old technology, but very reliable. So it seems like it's just easier to deal with troubleshooting, complexity, all of that stuff. So I have a question, then. It sounds like there are sort of three different patterns for single-page apps. There's the full BFF, full backend proxying everything. There's this middle ground of proxying only the OAuth bits and then handing off the token. And then there's no backend, where the single-page app does the OAuth flow and gets tokens. Do you see one or more of those as being better or worse in terms of, I guess, security being the factor we're considering here, mixed with the developer experience? Do you have a preference there? From a security perspective, I think that the full BFF wins hands down. As in, if you can have everything done by a reliable confidential client that runs on your iron, whether it's rented iron or something like that, it doesn't matter; if it's under your control, security-wise, that's definitely the one that is the most solid.
On the other side of the spectrum, when you have to do everything in JavaScript, we can make the best of the situation, but ultimately you are living in a tab together with lots of other tabs, with primitives that are designed to be general purpose and not strictly for security. So that is, I think, the riskiest neighborhood. When you walk through that neighborhood, make sure that your wallet is in the front pocket rather than the back one. And TMI BFF is somewhere in the middle. The thing that I believe is important for us identity people to remember is that security is of course paramount, but we also cannot ask people to go do groceries in a tank; the security posture has to be proportionate to the actual situation. And the thing that I mentioned earlier about what it would mean for your backend to take on all of that stuff, it truly is important. It might be a make-or-break thing: if you start throttling users, and users fail to use your app and stop using your service, there is a real business impact. Or you might have multiple APIs that are owned by third parties, and those third parties expect those APIs to be called from the browser. Or, even for security, because security is a bigger picture than just where the tokens are, think anomaly detection. If I'm making calls from the browser directly to the API, and the API has some features in terms of anomaly detection and similar, it needs the context that comes from where the call is originating. Whereas if you funnel everything through a single place, now everything will come from the same IP. There are techniques in which you can take some of the context and transfer it, but it's tricky; machine learning looks at details that as humans we don't see.
So if you now have to take selected parameters from the request, embed them in a bag, and send them through your proxy, you are already cutting out some information; you don't know what you don't know, and maybe you are missing some important thing that the anomaly detection needs to have. So security-wise it's a really complicated picture. So I think that although I abhor complexity, we do need the flexibility to have the right tool for the right situation. And I think that today, having only those two options without this middle ground led to a lot of cottage-industry attempts to deal with that situation, and I think we can do better. If we give guidance, if we give people some libraries, we might be able to help them without forcing them to either go to the JavaScript side and take on more risk than necessary, or go all the way to the server side and then have to become experts in availability and pay more for doing that proxy stuff. I think that's a good way of saying it. It's a good middle ground, and going back to what we were saying earlier, people are already doing this. It's already a practice people have been doing, so it's useful to capture that in a draft in order to document the best way to do that pattern if you are going to be doing it. You know what's ironic? Something like three or four IIWs ago, IIW is the Internet Identity Workshop, this unconference where we crazy people meet twice a year, I remember there was a presentation in which someone was suggesting exactly this pattern, and I was, in that little crowd, the most vocal opponent, saying: no, don't do it. How dare you do that? Now you have an access token which was obtained with the highest form of client, the confidential client, and now we are giving it to the lowest rank of clients, the JavaScript client.
And so now, as an API, if I receive this access token, it carries everything that says, I have been obtained by a, like, pure organic grass-fed client, and now you might make authorization decisions based on that, based on the nature of the client, which would be wrong, because now the client that obtained the token is not the caller. In fact, I was pushing things like, every client should ask for its own tokens, so that we avoid doing this kind of stuff. But then, during the discussions we had for the JWT access token profile, we had months of discussions about, should we have something in the access token which says whether this token was obtained using a confidential client flow, or a specific grant, and similar. And I remember also talking with people from Okta, like with Karl McGuinness, who is an amazing expert in this space, and we were talking about, what should we do? And one suggestion was, let's place the grant that was used directly in the access token, and others pushed back, saying, no, it's an implementation detail which the resource server shouldn't know. But anyway, long story short, it turns out that most people didn't care enough about this difference to the point of actually placing anything in the token or standardizing it, so that didn't happen. Interesting. So from this I derive that there is not enough interest in keeping these two things separated, and so we just added in the security considerations something like, beware: if your resource server is actually looking for that info and making authorization decisions on it, this pattern is not for you, because you will get a false impression of where the token is coming from. But that's it. To me, that was the main thing that made me think this pattern is bad, but as soon as that was cleared, I changed my mind and said, okay, if no one cares about that, then yeah, we should describe this pattern and help people do it in the right way, given that they are already doing it.
That makes sense. I could see that being done in a resource server, acting differently based on how secure it thinks the client is, but there are multiple ways to capture your level of confidence in a client, and it doesn't have to be based on grant type or client type. So you could still use this pattern and make decisions about your confidence in the client. You would just have to do it yourself, like add something into your token that describes your confidence in the client, and that's fine, you can do that. Like, when your developers go register this app, set this app up, you could make them tell you, tell the authorization server, yes, I am making a client that is going to be using TMI BFF, and that can be a flag somewhere that gets put into the access token later if you really need to know it. You are absolutely correct. I think what's interesting is that nobody seemed to care. Yeah, but then ultimately what happens is that people occasionally come to me for other reasons and say, I would like X, and every time I punt them to that discussion, saying, yeah, you know, you want this info, but it didn't cross the threshold where we needed to standardize it. So you can always have it in your own system. And also, if someone owns it end to end, like if I own my authorization server, or my tenant, and I own my resource server, and I know that when I register an app in there I can say this app is a confidential client, then there is no other way of getting tokens for that thing. If I control both sides, then I don't need anything in the token. I just know it out of band. The thing is that even when your company owns both your API and your tenant, you might still have multiple developers working on these, and so someone, unbeknownst to you, might go and change this thing on the other side.
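To make that flag idea concrete, here is a minimal sketch of an authorization server embedding its own hint about how a token was obtained. The `client_profile` claim name and the registration flags are hypothetical conventions invented for this illustration; no OAuth spec defines them:

```python
# Sketch: an authorization server recording its own (non-standard)
# confidence hint about the client in the access token claims.
# "client_profile" is a hypothetical claim name, not from any spec.

def build_access_token_claims(subject: str, scope: str, client: dict) -> dict:
    claims = {
        "iss": "https://as.example.com",  # illustrative issuer
        "sub": subject,
        "scope": scope,
    }
    # Record whether this client registered as a confidential client,
    # and whether it declared it will mediate tokens to a browser
    # frontend (the TMI BFF pattern). Both flags are private conventions
    # between this authorization server and its own resource servers.
    claims["client_profile"] = {
        "confidential": client.get("confidential", False),
        "mediates_to_browser": client.get("tmi_bff", False),
    }
    return claims
```

A resource server that cares can then read `client_profile` instead of guessing from the grant type; one that doesn't care simply ignores the claim.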
So it's always safer, and the freshest information, to look at what's inside the token, because it talks about this particular transaction. But exactly like you described, if a stack believes that this is important, nothing prevents anyone from adding this information on their own; they just need to interpret it themselves. Right, and it doesn't seem like interoperability of that piece of information is terribly important either, because it is probably tied into a lot of your internal logic anyway that isn't exposed. Yeah, that's the truth. Yeah, I think it's always a trick of what should go into a spec, like how much do you put in, because you don't want to prescribe too much implementation detail. You mainly want to capture things for interop purposes, but at the same time you want to call out security issues that people should be aware of within their implementations. So that's always a tough line to walk. It's super tough, and the only way to navigate it is by living it. Que sera sera. In fact, to me the clearest and closest-to-me example is the JWT profile for access tokens. I remember when Dick Hardt showed up with OAuth WRAP and discussed it at one IIW. OAuth WRAP had both the protocol, which ended up becoming OAuth 2.0, with this idea of separating out the authorization server and similar, but it also had a token format. It was, deliciously outdated, a format with square brackets instead of JSON, but it had a token format. And the token format was rejected, because back when they turned OAuth WRAP into OAuth 2.0, they said, well, we like the separation of concerns, we like the ability to have a standalone authorization server, but in the vast majority of cases the resource server and the authorization server will be collocated. So we don't need any format.
Also, I think there was a bit of allergy to the SAML world, like the classic teenager that wants to rebel, and not having the format was a form of, no, I do what I want, this is freedom. And it worked really well for a lot of things, but then introspection had to come out, because, okay, how do I validate my tokens? Once you do want to separate those two, you need a way to do it. And given that introspection doesn't scale to hyperscale, like if you are a big cloud provider, having people phone home every time is really hard, organically lots and lots of people just moved to using JWT as a way of expressing access tokens, because it just made sense. Parallel evolution: lots of people did that. And that is one aspect that now we have to somewhat remediate, because we have now created the profile and it's going toward RFC, which is great, but there are so many implementations out there. So I am under no illusion that people will jump on it right away, but think of all the challenges that we had with even how you express the scope claim. I was just looking at one GitHub issue in which people were saying, why isn't my SDK automatically doing checks? Can I just decorate my API and say, you can call this thing only if you have this scope? And the people were saying, well, we couldn't do this until now, because we don't know where the scope information is going to be in the token. Some people call it scp, others call it scope, others call it scopes. Sometimes they are space-delimited lists, other times they are JSON arrays. So if we had had some degree of specificity in OAuth already, we would not be in this situation. But again, we could not predict this so many years ago. And so we do what we can to course correct. Yeah, that makes sense. It's a tough one. But no, I'm excited about how that stuff is moving.
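The scope-claim fragmentation described above is exactly why SDKs couldn't do automatic scope checks. A defensive sketch of what such an SDK has to do today, checking each claim name and shape seen in the wild (the function name is mine, and the claim names are the ones mentioned in the discussion):

```python
def normalize_scopes(claims: dict) -> list[str]:
    """Return a token's scopes as a list, whatever convention the
    issuer used. Checks the claim names seen in the wild ("scope",
    "scp", "scopes") and accepts both space-delimited strings and
    JSON arrays. RFC 9068 settles on "scope" as a space-delimited
    string; the other shapes are pre-profile legacy."""
    for name in ("scope", "scp", "scopes"):
        value = claims.get(name)
        if isinstance(value, str):
            return value.split()       # space-delimited string
        if isinstance(value, list):
            return list(value)         # JSON array
    return []
```

For example, `normalize_scopes({"scope": "openid read"})` and `normalize_scopes({"scp": ["openid", "read"]})` both come back as `["openid", "read"]`, which is the kind of smoothing-over every SDK had to reinvent.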
And I think this TMI BFF, or whatever it will be called, is a useful pattern to document. I'm going to have to update my browser-based apps sort of overarching document to include that pattern with a reference, I think, because I'm due for an update. It's actually technically expired right now. I'll hopefully be pushing out an update on it next week or so, but that draft is supposed to capture best practices for single-page apps. And one of the things it does is list out these patterns and these options. Which reminds me that there's a fourth pattern, which is possible although probably not very common in the end, which is that you have even more shortcuts you can take if your single-page app client and the resource server actually share a domain, share a common host name, because you can do a lot of shortcuts with cookies and things like that. I would argue that that particular thing is a special case of TMI BFF, because it's basically the same case in which you never ask for access tokens, but you only use the session info. Basically, this was one of the problems we always had. Like when someone lands on our portal, ours at Auth0. Someone lands in there and you try to guide them through the myriad of options that people can follow for these kinds of things. And when you ask people, do you have a single-page app, you get some information about, as you rightfully recounted earlier, these three different patterns. And in particular, one of the most common patterns is, I'm not calling an API because it exposes a functionality I need; I'm calling an API only for interaction dynamics. As in, I want to drive the UX from JavaScript because it's a better, more responsive experience, but the API is purely there for presentation purposes. And so at that point, you just use the cookie and you're done.
Like, you just do a classic server-side, redirect-based sign-in, and then all the traffic between your JavaScript and your backend can be done with the cookie. Now, there are two challenges. One is easy, one is less easy. The first challenge is, say that your session expires: if you're making an AJAX call, you get a 302, and you don't know what to do with a 302, because it's AJAX. And this one is the easy one: you just have something which triggers a full-page redirect and you're done. The other one is where TMI BFF comes in, which is, if you're doing the cookie, the cookie buries in it all the information about authentication. So when your frontend wants to show stuff, as in, hello, Aaron, and anything that is part of your context, again, you need to find a way of communicating this information between your frontend code and your backend. And so if you take a library that supports TMI BFF, now you can just do the sign-in as usual, use cookies for calling your own backend without ever getting an access token, and any machinery that is necessary for asking for session information is taken care of for you. So you don't need to invent the mechanism for exposing the session content to your frontend, which is one of the reasons we insisted on placing this thing in there. Yeah, that's interesting. And actually, I know that we are about to run out of time. I wanted to mention one other thing about TMI BFF, which is not security related, and I'm probably biased, but I think it can be a bit of a game changer. One of the big issues we have in our space is testing. The fact that when you have an authorization server in the picture, you need to take steps to actually go there interactively, do consent.
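A minimal sketch of the two backend pieces just discussed, with a plain dict standing in for a real cookie-backed session store. The function and field names are made up for illustration; the TMI BFF draft covers the token-mediation side, not this exact shape:

```python
# Illustrative session-info endpoint for a cookie-based SPA backend.
# SESSIONS stands in for whatever store the session cookie points at.
SESSIONS: dict = {}  # session_id -> {"name": ..., "access_token": ...}

def session_info(session_id: str) -> dict:
    """What the frontend calls to render 'Hello, Aaron': it returns
    display claims from the server-side session, never the token."""
    session = SESSIONS.get(session_id)
    if session is None:
        # An XHR caller can't usefully follow a 302 to the login page,
        # so signal expiry explicitly; the JavaScript side reacts by
        # doing a full-page redirect to sign in again.
        return {"status": 401, "redirect_to": "/login"}
    return {"status": 200, "name": session["name"]}
```

The frontend only ever sees display data like `name`; the access token stays in the server-side session, which is the whole point of the pattern.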
It's hard to do testing, which is one of the reasons people push for still using the resource owner password grant: because it's the quintessential testing grant. Now, when you do TMI BFF, nothing prevents you from seeding your token store on the server side with test stuff, without really needing to go anywhere. So say that you want to test your particular APIs and similar. All you really need is to have a viable something to return, and you might have, I don't know, a test mode in your SDK. And at that point, you can do the sign-in locally. You don't need to do any redirect; as long as you have a cookie between the two, where the cookie came from doesn't matter, because you are testing. And you can just have a fixed token, always sending back the same token. Or you can go even further and just create a token with a test key, which all lives locally. All the JWT libraries today are capable of issuing JWTs, so having a test mode with this thing would be absolutely trivial. So if people just want to kick the tires, whether they want to do automated testing or they want to develop without worrying about connecting to a specific authorization server, it all becomes ultra easy without having to create an entire local authorization server. Because the alternative to ROPG is often, I need an entire little authorization server that runs inline with my app, which of course is very hard, and easy to abuse, because people might start using it in production too. Whereas if I'm just sending you always the same token, it's clearly a test; you can't really abuse it. So I guess that works because TMI BFF kind of puts the authorization server out of the picture, right? It's no longer part of the client-to-resource-server negotiation. Yeah, thank you for summarizing. I know I talk too much. You synthesized it in the absolute perfect way. Yes, that is the core. Yeah, okay.
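To show how trivial the "create a token with a test key, which all lives locally" idea is, here is a sketch that mints an HS256 JWT with nothing but the standard library and a throwaway key. All names are illustrative; a real authorization server signs with its own keys, and this is only for seeding a test token store:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, as JWTs use."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_test_jwt(claims: dict, key: bytes) -> str:
    """Mint an HS256 JWT locally, for test mode only: the key is a
    local throwaway secret, never a production signing key."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"
```

In a hypothetical SDK test mode, the token endpoint handler would just return `mint_test_jwt({"sub": "test-user", "scope": "read"}, b"local-test-key")` instead of calling out to a real authorization server.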
That's really interesting. I like that. And yeah, that's a huge issue for people. I know I get a lot of questions about it all the time, and there's not a good short answer. Yeah, in all the years that I've been doing this, this has always been an issue. And it's not for every topology, but for this one in particular, I think we can really help developers whip something together very quickly. Eventually you still need to worry about talking to your authorization server, making sure that you've configured the right claims and that you're using them the right way. So it doesn't make this thing go away, but at least it doesn't compound it with testing your actual app and the API calls. You can divide and conquer, basically. Makes sense. Great. Well, that's a good note to end on. Let's go ahead and wrap it up here. Thanks, everybody, for joining, and thanks for all the questions. And yeah, we should keep doing this. I'm happy to have had Vittorio on here. As always, the schedule of these live streams is at oktadev.events; that is where we post these shows as well as workshops that I do, and maybe we'll get some of Vittorio's events on there too. You can always see what's coming up next on that website. Subscribe to us on YouTube or Twitch, wherever you're watching this from. And yeah, hopefully we will be back soon. We haven't decided the schedule of the next happy hours now that the interim meetings have ended. We were doing them every week during the run of the six or seven weeks of interim meetings, so we'll probably get back to our every-other-week schedule. Make sure you are subscribed so you can see them when they come up. And yeah, thanks, Vittorio, for joining today. This was a lot of fun. Thank you for having me. It was a lot of fun. All right, see you next time.