Hello, and welcome back to another round of OAuth Happy Hour. I am Aaron Parecki, and we've got Lee joining me here as well. So if you are joining us live, either on YouTube or Twitch or on oktadev.events, then go ahead and say something in the chat so we know you're here. And feel free to drop in questions at any point in the chat, and we will keep an eye on that. Otherwise, we are going to be talking about what's going on in the world of OAuth and, yeah, anything else? Anything else you want to talk about, really? I want to talk about whether or not Thanos was right. Is that OK? We could go completely off topic if you wanted to. But yeah, why don't we start with what's the latest and greatest in the OAuth group, since we are still in the middle of our very long series of interim meetings? These are the IETF interim meetings. We did it that way in the OAuth group instead of trying to cram everything into the IETF week. Normally, in person, we would do a bunch of meetings, several hours crammed into a week, and a lot of other groups still did just that. But that does not really make sense when everything is virtual anyway. So we split it out over six weeks. And we just came up to the end of the series. And the last meeting this week was about a brand new spec, which should be fun to talk about. Hi, Rasmus from Denmark, joining us on here. Thank you for joining. So yeah, let's start with that. We'll drop a link to it in the chat as well. The spec that we talked about is called Token Mediating and session Information Backend For Frontend, abbreviated conveniently to TMI BFF. And they just left out the S for session just so that they could get the acronym TMI BFF. But basically, this is a pattern for single-page apps. So I can go ahead and put it up on the screen. Shout out to Mohamed joining us on YouTube. Oh, nice. Thanks for joining.
So this is a pattern for single-page apps to do an OAuth flow where, I think there's a diagram in here which helps with what I'm looking for. Here it is. The idea is for when you're building a single-page app and you want your single-page app to talk to an API directly, but you don't want your single-page app to do the OAuth flow. That makes sense. So yeah, that makes sense. So there are several different options you have when you're building a single-page app. You can make your single-page app be the OAuth client, where you do everything in the browser. And that's where you would be doing PKCE to protect the flow. But then the tokens end up being delivered to the app, and you hold onto them in JavaScript in, for example, local storage. And then you just send those back to the API. And the API's only job is really to just validate the token and make sure it can do whatever it needs to do. Yeah, exactly. And that's a pretty normal pattern. But the downside of that is your single-page app is holding onto tokens and has to store them somewhere accessible in order to actually use them. And you'll find a lot of people online saying never store tokens in local storage, that's bad. And nothing is ever that absolute, but yes, there are risks to that. So if you are not OK with those risks, if you're not willing to accept those risks, then you should look for other patterns. And one of the other patterns available is, well, it's actually in the other spec. Let me pull that one up too. The browser-based apps spec, this one, has a diagram in here as well that talks about this one. This is JavaScript apps with a back end. So in this model, the browser never sees the access token. Basically, the browser does the OAuth flow, but the tokens get delivered to the app's back end server. And the back end server is the one that's holding onto the tokens. And the communication between the browser and the back end server is just a session cookie. So it's not the token itself.
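To make the "back end holds the tokens" pattern concrete, here's a minimal sketch in Python. The class and method names are my own invention for illustration, not from the spec: the browser only ever gets an opaque session id in a cookie, and API calls are proxied server-side.

```python
import secrets

class TokenHoldingBackend:
    """Sketch of the 'back end holds the tokens' pattern: the browser only
    ever receives an opaque session cookie; API calls are proxied so the
    access token never reaches the browser. (Illustrative names, not a
    specific framework.)"""

    def __init__(self, api_client):
        self._sessions = {}     # session_id -> tokens held server-side
        self._api = api_client  # callable(path, access_token) -> response

    def on_oauth_callback(self, access_token, refresh_token):
        # Tokens stay server-side; only a random session id goes to the
        # browser (as an HttpOnly cookie, in a real deployment).
        session_id = secrets.token_urlsafe(32)
        self._sessions[session_id] = {
            "access_token": access_token,
            "refresh_token": refresh_token,
        }
        return session_id

    def proxy_api_request(self, session_id, path):
        # The back end attaches the access token; the browser never saw it.
        session = self._sessions.get(session_id)
        if session is None:
            return 401, None
        return 200, self._api(path, session["access_token"])
```

The trade-off described above is visible here: every API request has to pass through `proxy_api_request`, so the back end sits in the middle of all traffic.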
And this is sort of the other extreme, where the browser literally never gets the access token. And that means that the browser can't talk to the API directly because it doesn't have the access token. So instead, if the browser wants to go make an API request, it has to go through its back end server so that the back end server can use the access token to make the API request. So that's the other extreme of that. And the problem with this is that you have to have a back end server that's in the middle of all of your API requests. And in some cases, that's not very fun to deal with either. So those are the two patterns described here. And this is the new one. This TMI BFF is a sort of middle ground where the back end is the one that's doing the OAuth flow. So you don't actually do an OAuth flow from the front end. You do it all from your back end. But then the front end can go and ask to get the token. And it does actually get the token. And it can store it. And then it can go and make API requests. And so it's kind of a weird middle ground. But one of the potential advantages to this is that if the resource server and the back end server have a common domain, if they're on the same like .com, then you don't actually need the front end to store the token. It can be put in an HTTP-only cookie so the JavaScript can't access it, but the browser will include it in the request because it's in a cookie and these two things are on the same domain. And that's kind of one of the neat little things about this trick: you get the option of that pattern, and you get the security benefits of the browser never actually having access to the token, but you also don't need all your API requests to go through your back end. Nice. So that's that middle ground, middle pattern, which is potentially very useful. And one of the reasons that this is being brought up in the group is because people are doing this today, which is how a lot of these specs work.
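The choice between those two delivery options can be sketched like this. This is illustrative Python, not the draft's actual wire format; the function signature and field names are my own assumptions.

```python
import json

def bff_token_response(access_token, expires_in, rs_shares_domain):
    """Hypothetical BFF token-handoff logic: the back end already did the
    OAuth flow and now hands the token to the front end, one way or the
    other."""
    if rs_shares_domain:
        # Resource server shares the back end's domain: deliver the token
        # in an HttpOnly cookie. JavaScript can never read it, but the
        # browser attaches it to same-domain API requests automatically.
        cookie = (f"access_token={access_token}; Max-Age={expires_in}; "
                  "Secure; HttpOnly; SameSite=Strict; Path=/api")
        return 204, {"Set-Cookie": cookie}, None
    # Different domains: return the token in the JSON body, and the
    # front end holds it in memory and sends it as a Bearer token.
    body = json.dumps({"access_token": access_token, "expires_in": expires_in})
    return 200, {"Content-Type": "application/json"}, body
```

The cookie branch is the "neat little trick" mentioned above: it only works when the browser will send that cookie to the resource server, i.e. when the two share a domain.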
It's not like someone's coming up with all this stuff from scratch. This is being done today. And the idea with turning it into a spec is to formalize the pattern and actually say, OK, if you are doing this, then here are other things you might need to keep in mind, and here's the best way to do it, here's the most secure way to do it. Nice. Yeah, I guess I think about it a lot like test coverage in your software projects. Like, if I'm writing medical software, then yeah, 99% test coverage is probably a good idea. Whereas if I'm writing Twitter, I probably don't need 100% test coverage. You know what I mean? I'm willing to risk a little bit of a bug here or there. But if I'm writing medical software, I'm not really willing to risk a bug. Yep, yep. Hello, Tamarius C, joining us on YouTube. Yes, hello. So yeah, that draft was brought up in the group and received pretty well. I think people generally understood the problem being solved and understood also the limits of this approach. None of these are foolproof, right? I guess I should probably also mention the downside of this approach, which is that the token still is in the browser, even though it's not accessible by JavaScript. So if you're using, for example, JSON Web Tokens as access tokens and there's actual data in the token, the user can still read that data. You haven't moved the token completely out of the browser. And there's still a risk of the token being stolen, not through cross-site scripting attacks, but through lower-level attacks. Like if malware is on the machine, that token is on that hard drive somewhere. And it could still be stolen that way, which is different than stealing a session cookie, which can be used to tell your back end to make an API request. They're both still bad, but they're different kinds of threats. So nothing is perfect, browsers are terrible, we should never use them.
But these are three different patterns that you have in order to arrange the different pieces involved in the flow. Yeah, security is a lot like driving in your car. You can lock your car. It's great. But if you've got an iPad sitting on the front seat, there's a pretty good chance somebody's going to break that window. So the best thing to do is just not leave anything out that looks like it's worth breaking the window for. What makes it not worth the attacker's time? Yeah, exactly. Yeah, security is the same way. Nothing is 100% secure. So it's just a matter of making it not worth the attacker's time to go through the trouble of pulling off whatever attack you're talking about. And that's true with hashing algorithms, too. Hashing algorithms are the basis of pretty much every cryptography thing that we deal with these days. And there is no perfect hashing algorithm. It's all potentially reversible or crackable. So it's just a matter of choosing a good enough one that is not likely to be cracked anytime soon. Yeah. So that was a good session. I think the actual wrap-up of that call was to move the discussion to the mailing list and do the actual call for adoption. The call for adoption is when a new document that exists as what we call an individual draft, somebody's idea, gets officially adopted by the working group. So we didn't do that call for adoption on the phone because you want the record of that on the email list. But I expect to see that very soon. And I think it will get support, because everybody seemed to generally understand the problem. There were a couple of specifics that are going to be discussed and worked out. Several people, myself included, brought up some details like, oh, well, what about this particular edge case? Or this thing should be named differently or whatever. But those are the details that get worked out later. It wasn't anything fundamental.
It wasn't like anybody had fundamental issues with the mechanism being described. Nice. Yeah. So how does the OAuth working group kind of work? Do you guys scour the internet for this stuff? I mean, this thing you said came from the way people were already doing it. Was that because people reported to you that's the way they were doing it? Or were you guys just scraping the internet and found these things? Yeah. So every document in the working group ends up being led by one or more people. Usually just one person is leading a document, and it's because they have a particular interest in it for some reason. So I'm leading the OAuth 2.1 document. I have two co-editors as well, and we're doing a lot of the work. But I brought it up with the group and said, hey, I think we should do this thing. And my interest in that is to help developers have an easier time understanding OAuth and giving everybody a better place to start from in the future. And that's the reason I'm doing it. Sometimes people will lead a document because it is a practice that their company is doing that they would like to bring to the broader group or formalize or whatever that is. And that's actually where the JSON Web Token access token spec came from. It was several people who were already using JSON Web Tokens for access tokens wanting to formalize that pattern. And then this one is similar. This is actually being led by Vittorio from Auth0. And he has been working with a lot of customers who are doing this. So that's where that experience came from. He's seen this in the wild. He's been working with people who've done this pattern. So that's, again, the reason that he's the one leading this document. Nice. And can anyone lead a document? Or do you have to be part of the working group? You have to be part of the working group, but anybody can be part of the working group. And there is no actual process, really. Basically, you can just show up. So there's no fees.
There's no official application form. You just join the mailing list. I guess that's the application form. You join the mailing list, and you can write something on the list. And in practice, what ends up happening is, like any political organization, you want to get to know people, become friends, get support for things before you just show up and start talking about stuff. So technically, you could just tomorrow write an email to the list and say, hey, I've got this spec. I think it would be useful in the OAuth group. You would be unlikely to get a lot of support if you are brand new to the group and just show up and start talking about whatever document you've brought in. However, if you find out who's doing things that are similar, who might have common interests or motivations as you for whatever you're doing, talk to them first, sort of off the list. This is what a lot of the meetings are good for. And when they were in person, you could come to the meetings, give feedback on other people's documents, contribute that way, and then sort of in the breaks and stuff, mention that you might have this other problem that you've been working on a solution for and sort of feel things out, feel out who's interested in it. Then when you are ready to actually bring it up in the official channels, you would already have more support for that document. And that's much more often how that ends up happening. So I think Marius has a question that's going to split across two or three comments, but. Yeah, OK. So we have a similar approach in an environment where the front end and the back end are not built by the same person, but instead of returning the token whenever it's requested, we return it in cookies for each request. Also, we recommend injecting a JS script for requesting tokens in the background. Is this approach secure? That actually sounds a lot like what's being described here. So if we look at, let me share just that browser, this one.
So if we look at the document, I believe that pattern is actually described here. So requesting access tokens: the front end generates the request to the token endpoint. Actually, one of the reasons for having this spec in the first place is specifically for this use case of having different teams building the different pieces, right? Because now this is a contract that both teams can read to make sure they're doing the right thing, and you don't have to worry about miscommunications. This also describes what happens if the back end knows the access token is expired. It can go and refresh the token and return a new one. The back end returns it to the front end here, and that's where I wanted to look for the response. Oh, this does describe it just in the JSON response rather than in a cookie. Now, I thought this described the cookie option, but I don't see it now. So maybe I was just thinking about something that was being discussed in the call, not in the document. But yeah, instead of returning the access token in the JSON response, if you're just returning it in the cookie, that's even one step more secure than having the JavaScript actually have access to the access token, right? So that is, yeah. The main difference is in the proposal, the token is stored in memory. In our case, it's in cookies. And if you're storing it in cookies where the JavaScript wrote the cookie, that's not really any different anymore, because if your JavaScript sets the cookie value, your JavaScript has access to the cookie, which means it can be stolen through cross-site scripting attacks the same way as if you were storing it any other way. The back end is returning the cookie. Exactly. If the back end sets the cookie in the response, then the front end never actually has access to the access token, which, now I'm super confused because I swear that that pattern was talked about on the call and I can't find it.
And the minutes are a little bit, I think there's some formatting issues going on with the minutes. These were supposed to be new lines. It must be parsing it as HTML or markdown or something, so it's a bit hard to read. So, okay, so Marius, in your case, if the back end sets the cookie but your JavaScript also has access to it, then that's not really any different than the browser getting the JSON response and storing the token itself, either in a cookie or in local storage or whatever. The difference you're looking for is whether a cross-site scripting attack can give the attacker access to the actual access token itself, or if the damage of a cross-site scripting attack is limited to the attacker being able to tell the app to make an API request. Those are very different problems, because if you can just tell the back end, or even a service worker or something, to make an API request, you're limited to only getting data that your application could have gotten itself. Whereas getting access to the access token itself, and potentially the refresh token depending on how you're doing it, has a lot more implications, because you might be able to use the access token at different APIs if it's accepted at different resource servers, and the refresh token will let the attacker get new access tokens, which would also be a lot worse. So yeah, it sounds like if your JavaScript code can ever actually get the value of the access token, then it doesn't really matter in the end how your JavaScript is storing that value. Yeah, I guess I was under the impression that the reason that you use cookies is so that JavaScript doesn't have access to the token, so that cross-site scripting can't get access to a token, right? Yep, that would be the main benefit of using specifically cookies as a storage for the token.
I say storage in quotes because it's not really a storage API in the sense of I can put stuff there and get stuff there, but it is the thing that's holding onto the token. Yeah, now that I'm thinking about this and I can't find it in the minutes, that is a thing I'm gonna bring up relative to this document when it's being discussed in the group: is it worth describing that cookie pattern as well, for the maybe edge case of when your resource server and back end server do share domains so they can share cookies? That seems like it'd be a useful pattern to describe as well. Yeah, do we have any other questions? I see Bjorn on YouTube, hello, Bjorn. Thanks for joining. I'm still really kind of giggling internally about the TMI BFF, but... Yes. Oh, Bjorn does have a question. Cool, does the refresh token grant generate a new refresh token? It's not a new question at all. So in core OAuth by itself, the refresh token grant does not specify whether or not a new refresh token is returned, because it was meant to let the OAuth server decide whether to do that or not on its own. And so in practice, this has meant that OAuth servers do different things for various reasons. And in a recent document, the Security Best Current Practice, there is new guidance specifically about this particular thing, which is: if you aren't using a client secret, if you're a JavaScript app or a mobile app, then you should make refresh tokens one-time use. And the reason for that is because without a client secret, the refresh token is the only thing you need in order to be able to get new access tokens indefinitely. So whatever the lifetime of the refresh token is, it can be used to get new access tokens. So if you can steal a refresh token, then the real app can get new access tokens and the attacker can get new access tokens using just the refresh token. And that request looks identical from the two different parties, because that's the only thing in the request.
It's just grant type refresh token, and the refresh token itself. And the OAuth server would never know that it's handing new access tokens to an attacker. So without a client secret, that's kind of the situation that you're in, which has always been true. And that's why you've often seen guidance like, don't issue refresh tokens to JavaScript apps, because JavaScript apps are more likely to have cross-site scripting attacks. And if a cross-site scripting attack gives the attacker access to the refresh token, then that's very bad. But if you are doing one-time use refresh tokens, that's a whole different story now, because with one-time use refresh tokens, if an attacker can steal one and then go and use it, but the real app has that refresh token also and then goes and uses it, that token's been used twice, which is a problem, and the OAuth server can say, oh, I see something bad has happened. And then it can kill the access tokens, kill the refresh token, and make the user log in again. So you can detect the attack, and because you can detect the attack, it then means that the risk of issuing refresh tokens to mobile apps and single-page apps is now a lot different, because you can tell if something bad has happened and stop it. So the current recommendation now is for mobile apps and single-page apps, you should do one-time use refresh tokens, where every time it's used, you get a new one back. And you don't really need to do that if you have a client secret or any form of client authentication, because it doesn't really get you any benefit at that point. So how long it takes until OAuth servers catch up with that guidance is a different story, but Okta is pushing that out very, very soon. It's already available in some... I never remember the actual names of the different stages, EA, early access. I think it's whatever's after that. Isn't there one after that before it's actually public? I can't remember. Anyway, you can get it turned on right now.
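Here's a rough sketch of what that one-time-use-with-reuse-detection logic could look like. This is illustrative Python under my own naming, not any particular server's implementation; a real server would persist this in a database and tie access-token revocation into it too.

```python
import secrets

class RotatingRefreshTokens:
    """One-time-use refresh tokens with reuse detection (a sketch)."""

    def __init__(self):
        self._families = {}  # family_id -> set of still-valid tokens
        self._states = {}    # token -> [family_id, already_used?]

    def issue(self, family_id=None):
        """Mint a refresh token; a fresh family starts at user login."""
        token = secrets.token_urlsafe(32)
        family_id = family_id or secrets.token_urlsafe(8)
        self._families.setdefault(family_id, set()).add(token)
        self._states[token] = [family_id, False]
        return token

    def redeem(self, token):
        """Exchange a refresh token for a new one; None means re-login."""
        state = self._states.get(token)
        if state is None:
            return None
        family_id, used = state
        if used:
            # Replay detected: either the real app or an attacker is
            # holding a spent token and we can't tell which. Revoke the
            # whole family and force a fresh login.
            for t in self._families.pop(family_id, ()):
                self._states.pop(t, None)
            self._states.pop(token, None)
            return None
        state[1] = True
        self._families[family_id].discard(token)
        return self.issue(family_id)
```

The key property is in `redeem`: the spent token is remembered so a second presentation of it is distinguishable from a never-issued token, which is what makes the theft detectable.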
So when I go and use my test org for demonstrating, there's a checkbox there of, like, let this app use refresh tokens, and then make these refresh tokens one-time use only. And it'll be available to everybody very, very soon. I don't remember the exact date, but it's coming soon in Okta. And I know other servers either have had it already for a while, or just always worked that way, or are adding it. So we are seeing all that happening, which is great for security. Nice. Yeah. Yeah, so that's what's going on with the OAuth meetings. Oh, cool. Marius has another question about this. If the refresh token is already used, what is recommended? So yeah, I believe this is actually described in the Security Best Current Practice spec. Let me pull that up. This is loading very slowly. Come on. There it is. So let's see, where is that? 2.2.2. Oh, did you see it? Token replay prevention. Yes, thank you. Sender-constrained, meaning it uses some form of client authentication, or refresh token rotation. Oh, described in 4.1.3. So okay, this is describing all the motivation for this. Recommendations. Okay, refresh tokens are restricted to the same scope it was issued with before; refresh token rotation; the previous refresh token is invalidated, but information about it is retained. If a refresh token is compromised and subsequently used by both the attacker and the legitimate client, one of them will present the invalidated refresh token, which will inform the authorization server of the breach. The authorization server cannot determine which party submitted the invalid refresh token. I feel like that's not worded correctly, because it's saying it will; it should be like a MUST or something. Or a SHOULD, yeah. So yeah, the idea is that it should revoke the active one. So if the token ends up being stolen and it's used once, and then it's used again, it should revoke that one.
Even though that's gonna inconvenience a legitimate client and force them to log in again, it's better than allowing the attack to continue. Exactly, yep. What's not in here is, well, this is good, I should be taking notes. What's not in here, which I actually thought was here, is whether it should also revoke access tokens that were issued to that refresh token. Does that make sense? It's like a tree diagram, really. If a refresh token is used, then it'll get back a new refresh token and a new access token. And if that first refresh token is used again, it can refuse to issue an access token and it can revoke that refresh token, but there's still this access token over here that was issued in response to that refresh token a while ago. Should that one get revoked as well? And if so, should it also revoke the other access tokens down the chain, right? Because that new refresh token may also have been used. So the question is, how far up the chain do you go in revoking refresh tokens and access tokens? And yeah. And so since you can't know which one's the legitimate actor and which one's the bad actor, it seems like the answer is all the way up the chain. I would tend to agree. Basically all the way up the chain to when you detected that split. Right, because that's when the compromise happened. If you imagine a tree diagram where the user logging in is the first thing that happened, then there's an access token and a refresh token issued. That refresh token may then have resulted in a new access token and a new refresh token. And that refresh token may have resulted in a new access token and a new refresh token. So you've got this whole chain going on, and as soon as one of those refresh tokens is used twice, basically everything below that in the tree should be considered compromised, and you lop off that whole tree.
And everything above it technically doesn't have to be considered compromised, but the only recourse is for the user to log in again, which would start a new tree. So maybe it does make sense to just lop off the entire branch. Yeah, but if the refresh token is stolen by a bad actor from the legitimate user and used first to get a new access token, and then the legitimate user requests a new access token and a new refresh token, then not only should you invalidate all those all the way up the chain, but include that one too. Because you can't tell who was the first one of the two, right, to have used the refresh token. Yeah, exactly. I kind of wish I had a way to draw this right now. I don't think I do in this browser. I can only share stuff in Chrome. But yeah, I'm imagining this picture in my head that I want to draw, but I don't think it's gonna happen. So there's a little bit of this, these details, apparently missing from this draft, which I should bring up. Should the user be notified about this possible bad actor? I think this will be heavily dependent on what exactly you're building, because it may or may not be appropriate to tell the user, right? It's possible to tell whether the attacker has actually accessed data, because you can tell if an access token has been used, right? So you could tell if any of the potentially compromised access tokens have been used or not. So you could use that to determine whether or not to actually tell the user something has happened. But it also may not always be appropriate to do that, because it might look too scary, right? I could see that in some cases, it would be more confusing to the user to have to explain what exactly went wrong. Yeah, I mean, it sounds scary to say, hey, somebody stole your access token and was able to get access to your account, but they didn't do anything. Yeah, right. They never made a request before we shut it down.
So, yeah, don't worry about it, but I just wanted to freak you out a little bit first. I could see maybe, so you know how in Google you can go and look at your security audit, where it'll tell you, here's all the different devices that are logged in right now, here's all the applications you've authorized, here's the last time they've been active? I actually really like that one, where it'll tell you this app hasn't even been used, hasn't made any API requests, since 2016. And I can be like, cool, I'm just gonna revoke it, just to make sure that if it ever gets compromised, it can't do anything with my account in the future. I could see maybe having something in that kind of screen where it's like a little warning of, this app, something fishy went wrong. We don't quite know what, but look it over and see if it seems suspicious to you. Right, right. Yeah, maybe you want to review this, because you see that with, you know, it'll say you're logged in from these four browsers in these locations; maybe you should review these because this one looks suspicious. But that often miscategorizes things for me, because my home internet, some people think it's in Washington even though it's in Portland, because the IP address that I have for some reason got reassigned, because the ISP operates in both states. So sometimes when I log into Okta, it'll be like, hey, do you want to approve this login from Washington? And I'm like, I do, because I know this is me, even though my IP address is getting categorized wrong. Or if you go into your Google list of apps, it'll be like, oh, you're logged in from these five locations, and one of them happens to be your cell phone, which again, IP addresses on cell phones get reassigned all the time. So those databases get out of date. So it's not like a, you know, red alert, you've been hacked. It's a, this thing looks like it's potentially suspicious. Yeah.
Oh yes, a comment from XSRF, great username. This is a very good point. So depending on the single-page app, a simple duplicate tab, opening a new tab in the browser, could also trigger a token being used twice. It's not always a bad guy. That is very true. And it's not even, yeah, so it's opening in a new tab. It's also potential network glitches. So, especially on mobile, if you make a request with a refresh token and then the network request fails partway through and the app tries it again, the request may have made it to the server, but the response may not have made it back. So the server will see it twice, even though the app didn't actually get the response first. So the app didn't get the new refresh token, so it's still using the old refresh token. So it would think it's been used twice by a bad actor, even though it wasn't. So the safest thing to do in all of those cases is to invalidate everything and get the user to log in again. That's the safe thing. That's not always the most desirable thing, because in some of these situations, depending on the app, like if you expect people to use it in two tabs, or expect people to be using it while they're going through tunnels on a phone, that might be a very common occurrence. So actually in Okta, one of the things they did is there's a setting next to the checkbox of, do you want a window in which it's okay to reuse refresh tokens? So you can set it to, like, 30 seconds. So you can say it's not gonna be considered invalid if the same refresh token is used twice within 30 seconds. And that would allow for brief network glitches to not affect the state of it, right? And you can choose what interval you want for that.
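That reuse-window idea is simple enough to sketch in a few lines. This is a hypothetical helper of my own; the 30-second default just mirrors the example from the discussion, and real servers make the interval configurable per app.

```python
def reuse_is_suspicious(first_used_at, replayed_at, grace_seconds=30):
    """Decide how to treat a replay of an already-used refresh token.

    Inside the grace window: probably a duplicate tab or a network retry,
    so let it slide. Outside the window: treat it as a stolen-token
    signal and revoke the token family. Timestamps are epoch seconds.
    """
    return (replayed_at - first_used_at) > grace_seconds
```

This is exactly the security-versus-usability dial described above: a longer window means fewer forced re-logins but a longer period in which a genuinely stolen token goes undetected.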
And which, again, it's like, choose where you want to be on the security and usability spectrum, because the only fully secure thing to do is to lop off the entire tree of those tokens, but that may not be a good user experience, because you would have the user log in again from scratch, which might be a little bit too aggressive of a recovery path in some cases. Yeah. I mean, if you're talking about the difference between, again, it goes back to what's your risk aversion level? Are you a banking app, or are you Facebook, you know? Exactly, yeah. And even if you are a banking app, within the app, what are you trying to do? Because if you are just trying to do a find-a-nearby-ATM versus a bank transfer, those are two completely different API operations with very different threat profiles, and you can decide how to respond to each of them in context. Right on, right on. Yeah, fun stuff. Anybody else have any questions? XSRF was coming from Twitch too, that's nice. I'd like to see that. Nice, yeah, cool. So, let's see, what else? We don't have another interim meeting set up for the OAuth group yet. We were going to have one about OAuth 2.1 next week, but myself and the other editors got behind and did not have time to publish the new document. So we had to bump that a few weeks. So that'll be probably later in May, when we'll end up doing that interim meeting to talk again about OAuth 2.1. That'll be the second interim for that document in this in-between time, between the IETF meetings. And yeah, there's just a lot of document editing to do on that before we're ready to bring it back to the group. And then in July, there's the next IETF meeting, which is where we would normally have in-person sessions. That one is still virtual. That one is not going back to in-person yet. And for the OAuth group, we will probably also not have meetings during it.
We'll probably do the same series of weekly sessions instead, which seems to be working out actually pretty well. It gives each person more time, because you get the full hour to talk about your draft. And it also means that the rest of the group doesn't have to cram and read six different specs all before the week of meetings. You can kind of stagger it and catch up on one spec at a time as it's being discussed. So I like that format a lot better. It's been working out really well. Yeah, over on the Okta events side of things, I have some cool stuff coming up as well. I just got off of this session, which was talking about what's new in OAuth and OpenID Connect. It looks like it's after, even though it was before, because time zones. And then next week I'm doing a workshop, which is public, you should definitely come. A hands-on workshop for OAuth and OpenID Connect: a guided walkthrough of OAuth, where I am leading people through, not building it, but rather doing an OAuth flow step-by-step. So we won't be writing any code, we're going to be doing it all using HTTP. So building URLs, making POST requests. You can use curl, you can use Postman, whatever you want. You can write code if you really want to. All you need to do in order to do an OAuth flow is be able to build a URL and then make a POST request. So we'll show step-by-step how that works. Yeah. You have a question from Julius. Oh, look at that. Okay: let's say we have a single-page app backed by an API server. Yep. Token-based authentication. Now you want to add OAuth 2 on top. Is there a best practice for storing access tokens in this scenario?
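For anyone curious what "build a URL and make a POST request" looks like concretely, here is a minimal sketch of the two pieces. The endpoints, client ID, and redirect URI are placeholders, not a real server; a real flow would also include PKCE parameters:

```python
import secrets
from urllib.parse import urlencode

# Placeholder values -- substitute your authorization server's real ones.
AUTHORIZATION_ENDPOINT = "https://authorization-server.example/authorize"
CLIENT_ID = "example-client-id"
REDIRECT_URI = "https://app.example/callback"

def build_authorization_url(scope):
    """Step 1: the URL the browser is sent to. No code needed, just a URL."""
    state = secrets.token_urlsafe(16)  # random value to guard against CSRF
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": scope,
        "state": state,
    }
    return f"{AUTHORIZATION_ENDPOINT}?{urlencode(params)}", state

def build_token_request(code):
    """Step 2: the form body POSTed to the token endpoint (curl works too)."""
    return {
        "grant_type": "authorization_code",
        "code": code,
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
    }
```

The user visits the first URL, logs in, and comes back to the redirect URI with a `code`; the second request exchanges that code for tokens, which is the whole flow the workshop walks through by hand.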
So if you are talking about adding OAuth on top, I guess it depends on what the purpose is for adding OAuth. Normally, the purpose would be either to allow third-party apps to access those APIs, not just the one single-page app, or because you want more flexibility in how users log in, like letting you add multi-factor auth and things like that without having to mess with your application code. Yeah, social login, totally. So you could basically swap out whatever login mechanism you have: you rip that out of your app, do a redirect over to the OAuth server, and get back access tokens. Yeah, single sign-on with OpenID Connect. So the question that I would ask here, and again, this goes back to how I talk about OAuth roles, is: if you have a single-page app with a backend and you're calling an API server, do you consider that API server to be part of the app, where it would be the app's backend, or do you consider that API server to be the resource server in OAuth terms? How you answer that determines how those two pieces fit into the architecture of OAuth, and that will then determine where you want to put the tokens. Would it also matter, I think you said it earlier, the threat profile of your application, when he's asking what's the best place to store the access tokens? I'm assuming he's asking: am I storing it in local storage or something else, or am I storing it in the backend API and just returning a session token and letting the backend make all my requests for me? Yeah, exactly. So if you are doing the storing-it-in-the-backend version, then your API server isn't a resource server in OAuth terms, it's part of the client, and then you would look at the OAuth for Browser-Based Apps document and the new TMI BFF for references to that architecture, which would then give you guidance on where to store those access tokens.
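The "session cookie in the browser, tokens in the backend" arrangement being described can be sketched like this. The in-memory store and helper names are invented for illustration; a real backend would persist sessions in a database or cache:

```python
import secrets

# In-memory stand-in for a server-side session store.
# The browser only ever holds the opaque session ID.
SESSIONS = {}

def create_session(access_token, refresh_token):
    """After the backend finishes the OAuth flow, keep tokens server-side."""
    session_id = secrets.token_urlsafe(32)
    SESSIONS[session_id] = {
        "access_token": access_token,
        "refresh_token": refresh_token,
    }
    return session_id  # delivered to the browser as an HttpOnly cookie

def proxy_api_request(session_id, path):
    """The backend attaches the bearer token; the browser never sees it."""
    tokens = SESSIONS.get(session_id)
    if tokens is None:
        raise PermissionError("no session; redirect to login")
    # A real backend would forward this request to the resource server.
    return {"path": path, "authorization": f"Bearer {tokens['access_token']}"}
```

This is the trade-off mentioned earlier: nothing sensitive lives in JavaScript-accessible storage, at the cost of routing every API call through the backend.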
That's where you get those three different options. But if that API server really is a real API server, like that is the resource server being accessed by the single-page app, then you have to choose to either go full-on OAuth from the single-page app, getting the access tokens into the single-page app and using them to make those API requests, or spin up a new server which is the backend for that single-page app, and follow that second example that we saw, right? Where the OAuth flow, well, I suppose the OAuth flow could still happen. No, it can't happen in the front end. It's got to happen in the backend if the backend is going to be the one holding onto those tokens, right? Yeah, exactly. So a lot of different options, unfortunately, and I say unfortunately because there's no clear winner as to which is the best option, right? There are trade-offs for each option, and the job of these specs is to describe the patterns and describe the trade-offs so that you can make an informed decision. But unfortunately, we can't just say "do this," because it depends on so many other things. In contrast, for the authorization code flow with PKCE, there is a firm stance: do PKCE, because it is always good. There's never really a situation where you're going to go wrong by adding PKCE. It solves several different kinds of attacks that other things don't solve. So you can say firmly: if you're doing an authorization code flow, you should also be doing PKCE with that flow. But as far as the architecture of your particular single-page app, there are still several different options available to you, and none of them is a clear winner, right? Yeah. Well, I get that a lot with other things. Like, as a musician, people always want to know: what's the best audio interface? Oh yeah. Or what's the best camera? Well, it kind of depends. I mean, are you shooting a lot in low light, or are you shooting a lot outside?
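Since PKCE gets that unqualified recommendation, here is roughly what generating the verifier/challenge pair looks like, a sketch of the S256 method from RFC 7636:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code_verifier and its S256 code_challenge."""
    # 32 random bytes -> a 43-character URL-safe verifier (the RFC 7636 minimum).
    raw = secrets.token_bytes(32)
    verifier = base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")
    # code_challenge = BASE64URL(SHA256(code_verifier)), without padding.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge
```

The verifier stays with the client; only the challenge goes in the authorization request, and the verifier is sent later with the token request, so the server can confirm the same client that started the flow is the one finishing it.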
You know, so a lot of that stuff comes into play, and you need to... So Julius says he went with storing the tokens in the backend database, linked with a session ID. Yep. Nice. Yep, cool. That's definitely a valid pattern, and you get a bunch of benefits from that, at the slight cost of having to route all your API requests through that backend. But yep, it could very well be that people are doing that anyway. I mean, I remember building lots of SPA apps with a backend where the backend was only there to serve the SPA and everything went through the backend anyway. The SPA was just there to make it a better user experience and only stored minimal information about the user, whatever it needed to just display "Welcome, Aaron" or whatever, you know. Yeah, totally. So yeah, a lot of it is just personal preference of how you want to be building your apps and how you want to be thinking about them. That's what makes those decisions at the end of the day. That's cool. Go ahead. Okay. I was gonna talk about something off topic, so... Go ahead, I was gonna wrap up, so go ahead. Let's go off topic. Well, I was gonna say something completely off topic, but since we talked about cameras, I did recently upgrade to the Canon C70. Oh wow. Which I'm loving. It was a major, major investment, so I'm hoping it ends up paying off. I had the C100 Mark II, and I like it too, but the body of the camera is kind of bulky and really needs one of the larger, more expensive tripods to put it on. And with the C70, I still ended up getting a larger tripod, because putting that much money on a tripod I didn't trust was not gonna happen. But... Yeah, nice, fun. I love camera stuff too. Mostly it's been me studying, what do they call it, the exposure triangle? ISO, aperture, and what's the other one? Shutter speed. Shutter speed, yeah.
And trying to figure out how to work all that stuff through the menus in my camera, although the menus are pretty similar to the C100, which I got pretty familiar with. Right, right, all the Canon menus are similar. And that's another example of trade-offs, right? There's no perfect combination of those three things. You need to use different shutter speeds and ISOs in different situations. Like, the benefit of a low ISO is less noise, but then the image is darker, because the sensor is less sensitive to light. So if you're in a low-light situation, you might have to bump up the ISO, but then that increases your noise, so then you have to compensate for that somehow. And you also get the shutter speed choice. You can see how my motion blur is smooth, right? Because I've got my shutter speed set to double the frame rate, which is one of the general guidances, but that comes at the expense of now only having certain values I can use for the other two. So it doesn't work in some situations either. Yeah, and thanks for the recommendation of Gerald Undone, man, because that's been... even though, at least for right now, in most of the videos I'm watching, I'm only understanding about half of it, because he goes into such depth about: oh yeah, this has got this little sensor inside, way down deep in the camera, that you're never gonna know about, but I know about it, and it affects these three things, so make sure that you know how to set these three things. But yeah, it's been a lot of fun. Awesome, it's a lot of fun, it's good stuff. It's also a nice break from the tech world too. Oh yeah. All right, we are way off topic on OAuth, but that's cool, this has been fun. Go to octodev.events for a list of new events coming up, and if you're interested in the OAuth meetings, you can find those at events.oauth.net. And I guess with that, we will wrap up here.
Thanks everybody for joining, thanks for all the questions, thanks for joining in on the discussion, it's always a lot of fun, and see you again in the next one. Peace.