to another round of the OAuth Happy Hour. Welcome, everybody. I'm Aaron, and we have Vittorio here as well. Thank you for joining. Good to see everybody here. Say hello in the chat. Let us know where you're joining from if you are watching this live. Otherwise, say hi in the comments later. So it's been a while since we last met. I think we last chatted before the new year. So happy new year. Happy new year. Exactly. Yeah, we'll share a couple of updates on what the OAuth group has been working on and talk about some of the upcoming events as well, what's going on in the OAuth world. And then I'll answer questions if you have them. Otherwise, we'll be just chatting about who knows what. That's how these things go. Yeah, yeah, yeah. We're always very careful because you never know when HR is also listening to us. We don't do anything bad here. We don't, even if we try, we don't. No, but yes, let's see. Let's start with a quick update on the specs. It's been a quiet spec month, week, couple of weeks, whatever it's been, holidays, you know. Yeah, not a lot of stuff being done over the holidays. But yeah, I think mostly minor things were published. I think the last time we met was like December 14th, so there's only been a couple of drafts updated since then, and I think these were all pretty minor changes. So the Security Best Current Practice, that, again, is the document that talks all about the most secure way to do OAuth as we have all currently agreed upon today, capturing best practices. Yeah. Can I make a suggestion? Not everyone knows that when you go into the specs, there is actually a feature with which you can show the diff, so that you can actually see what's changed since the last time. So it might be nice to show people how to do it. That is a fantastic idea.
When you're on the draft, make sure you are on the latest version, because sometimes you'll follow links and you'll end up on an old version. So make sure you click on the latest one first. We are on 19 here. And then up here, there's diff one and diff two. And I don't actually remember what the difference between the two diffs... I think it's just a layout thingy. Like it's slightly different how it's visualized, but the functionality is the same. Okay. So this is a side-by-side view, and it is doing literally a text diff between the two files as published, in text format. So you can see the draft number changed, which is not super interesting. But there are some interesting ones: the license changed. I wonder what that's about. But you can see that kind of stuff in the diff view, which is fantastic. Yeah, exactly. So some removing of quotes. Editorial stuff, as in like, yeah, moving quotes around and things like that. Yeah. Here's something that's a little more substantial. The requirement for refresh tokens to be sender-constrained or rotating is now limited to public clients. Yeah. Yeah. Yeah. Which makes sense, because that's a classic discussion that we often have, that it is the public client that needs extra measures in there. And there, like, I remember discussions about mobile clients and SPAs. Yeah. Like we like to say native clients and put desktop clients, mobile clients and SPAs in a single big grab bag. And in fact, their security characteristics are slightly different. So those distinctions don't always make it all the way into the spec, but my hope is that they will, because otherwise, well, it's a best practice. So you are not out of compliance if you don't do it, but it's always good to have more clarity, in my opinion. But others think it's better to have fewer words, which is also a valid point of view, which is why we have discussions here. Yeah. Yeah, it looks like the rest of that is editorial.
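For readers following along, the rotation behavior the BCP describes can be sketched as a toy model. This is an illustration of the concept only; the class and method names are made up, not taken from any spec or library:

```python
import secrets

class RotatingRefreshTokens:
    """Toy model of refresh token rotation for public clients: every use of
    a refresh token invalidates it and issues a replacement, so a stolen
    token stops working as soon as anyone redeems it."""

    def __init__(self):
        self._valid = set()

    def issue(self):
        # mint a fresh, unguessable refresh token and remember it
        token = secrets.token_urlsafe(32)
        self._valid.add(token)
        return token

    def refresh(self, token):
        # a token may be redeemed exactly once; redeeming rotates it
        if token not in self._valid:
            raise ValueError("invalid or already-rotated refresh token")
        self._valid.remove(token)
        return self.issue()

store = RotatingRefreshTokens()
first = store.issue()
second = store.refresh(first)   # "first" is now dead
```

A real authorization server would also tie tokens to a client and treat reuse of a rotated token as a possible theft signal; that logic is omitted here.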
So that was the only change of any substance, but that makes sense. And that diff view is such a good feature. I am glad that we had an excuse for showing it. Like, when I was authoring my own spec I would often actually do diffs locally. And then once I discovered that you can just do it like this, it was like, wow, that saved me hours probably of work. So that's a very handy feature. Yeah, yeah, it's good. It's good to be able to do that, because it's hard to follow the discussions all the time on the mailing lists or on GitHub or wherever the discussions are happening. And sometimes new drafts get put out and you're like, oh, I haven't really been following that one. What's changed? So let's see, we can do the same with Rich Authorization Requests, so make sure we're on 10. Look at this side by side. This was only four days apart. You can also see that here as well, like when the two drafts were published. So it looks like we got a new section for the token introspection response. Very impressive. And there is the new section, in fact. Ah, it's registering a new value for that. I'm not gonna go through all these details. Okay, that makes sense. I have a feeling the rest of that is just reordering the citations, yeah. So just to summarize, it looks like there is something that RAR introduces, and given that the solution might be using the introspection endpoint, here it's basically saying, hey, we need the ability to transmit that information also if the solution uses the introspection response. And given that the introspection spec has a mechanism for extending its list of values, RAR is taking advantage of that mechanism and saying, hey, here is a new thing I'd like to be able to get out of the introspection endpoint. So pretty straightforward, but again, also interesting to see the extensibility mechanisms that those specs usually bake in at work.
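To make the RAR change concrete, here is a sketch of what an introspection response carrying authorization details might look like. The "authorization_details" member is the value the draft registers; the payment example and the helper function are illustrative assumptions, not taken from the spec text:

```python
# Hypothetical introspection response; only "active" and
# "authorization_details" matter for this illustration.
introspection_response = {
    "active": True,
    "client_id": "s6BhdRkqt3",
    "scope": "payments",
    "authorization_details": [
        {
            "type": "payment_initiation",
            "actions": ["initiate"],
            "instructedAmount": {"currency": "EUR", "amount": "123.50"},
        }
    ],
}

def details_of_type(response, rar_type):
    """Pick out the authorization_details entries of a given type,
    e.g. so a resource server can check what the token allows."""
    return [d for d in response.get("authorization_details", [])
            if d.get("type") == rar_type]

payment_details = details_of_type(introspection_response, "payment_initiation")
```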
Like this is a good example of, yeah, it was a good idea to add that level of extensibility and to actually have a governing body, like IANA, which maintains this registry so that everyone can actually go and see what things are there and what the semantics are. So all goodness here. Yeah, yeah, so a minor, small world of spec updates these last couple of months, but we'll probably see some more movement coming pretty soon, because the next IETF meeting is scheduled and happening and on the calendar for, yeah, cross your fingers, an actual real-life event. We are currently planning for the week of March 19th in Vienna. And oh, I need to add you to this since you are also going. I am going, yeah. Well, I bought the ticket and I blocked the calendar. I didn't buy the airplane ticket yet, but the time is coming for doing that. I was hesitating because the IETF is this enormous event. There are like so many working groups, and to be fair, I only care about the OAuth working group and the SCIM working group. And then of course I like networking and everything, but being one entire week in Europe is kind of tough. So I was waiting for this schedule to show up, but I'm told that it will show up something like on the 18th or so. So I better get my airplane ticket before that. Yeah, I wish they put out the schedule earlier. All these working groups have meetings throughout the week, and that schedule of which groups are meeting on which days doesn't get published until kind of late, in my opinion, because this event is happening on March 19th and, yeah, I think it's February 18th. So it's only a month in advance that we actually know which day the OAuth group will be meeting. And there are a lot of people who only participate in one group.
And if the OAuth group is meeting Monday and Tuesday, then it doesn't really make sense to stay the whole week, because then you just are like, well, I guess I'll just hang out in Vienna. I mean, that sounds cool, but we also have jobs and other work to do. Yeah, well, actually I have a very Italian reason for wanting to know in advance. Almost every time I have a conference in Europe I try to attach a weekend at home in Italy. And so I'd like to know when I have to do this particular thing, but otherwise, yeah, Sachertorte. I have a second monitor that I bring with me when I have to work remote. Luckily, apart from the time zone, we have jobs that can be done remotely. So of course we can potentially do it from there. I don't know what the current COVID situation in Austria is, but again. Yeah, cross your fingers. I haven't been looking into it too much yet because it can change so quickly. We're still over a month out, so anything that we know today could completely change in another month. So yeah, we'll cross our fingers and hope things go well and hope this works out and hope we can all meet there. Yeah. But yeah, that does mean that there's a deadline cutoff for publishing new drafts ahead of that meeting. So we're probably gonna see some frantic pushes of new versions of various OAuth drafts pretty soon. And when is that deadline? That is a good question. I wanna say it's like, let's see, is it listed on the main page with all the dates, the important dates? Here's all the important dates. Early bird registration, cutoff date for birds of a feather. Here's the preliminary, oh, it's not even the final agenda. It's only the first agenda. Yeah. And then you get a week to request rescheduling. Final agenda is, all right. So yeah, March 7th. March 7th. Okay.
The other reason I wanted to know the dates is that I do have to write one draft, which is the thing that we discussed at the latest OAuth Security Workshop, which was that standard step-up idea: standard things for adding to the challenge when you get an error, and adding parameters to the request to the authorization server when the API that you're calling has a specific requirement in terms of authentication. How to actually transmit these in a standard fashion, given that a lot of people today are doing it in non-standard ways, because there is no standard. And during the discussion at the workshop, it was very popular; it was like the session with the most votes for the entire conference. So I'm confident that this deserves a draft. Then whether the draft will be picked up or not, we'll see, but yeah. So I better get my stuff together and also get our good friend, Brian Campbell, who's gonna be a co-author, I think. So yeah, we need to submit before March. So thank you for reminding me. I actually bona fide forgot. The clock is ticking now. You have 29 days or whatever it is. Or less than that, because February is a short month. Yeah, it is a very short month. Yeah, so yeah, that'll be good. I remember that discussion during the meeting and it seemed like everybody was excited about the possibility of standardizing that type of thing. So yeah, that's good. Actually, along those lines, there's a question here in the chat, which is tangentially related, from Mark. I've asked Aaron about this before, but interested to hear thoughts on Okta's new interaction code grant type. It's outside of the OAuth spec. So curious to hear if it's a secure path forward. Yeah, I don't actually remember you asking me that. I'm sorry if I missed the message on some other channel, but thank you for joining here. Yeah, the interaction code grant type is something new and I actually haven't looked into it too much myself because it is still relatively new.
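As a sketch of the pattern Vittorio is describing: the API rejects a call with a challenge naming the authentication it needs, and the client carries that requirement back to the authorization server. The parameter names below ("insufficient_user_authentication", "acr_values") are assumptions based on how the idea was discussed at the workshop, not a published standard:

```python
def parse_step_up_challenge(www_authenticate):
    """Parse a Bearer WWW-Authenticate challenge, like one a step-up-aware
    API might return, into a dict of its auth-params."""
    params = {}
    # drop the "Bearer " scheme, then split comma-separated key="value" pairs
    for part in www_authenticate.removeprefix("Bearer ").split(","):
        key, _, value = part.strip().partition("=")
        params[key] = value.strip('"')
    return params

# hypothetical challenge from an API that wants MFA before serving the call
challenge = 'Bearer error="insufficient_user_authentication", acr_values="mfa"'
params = parse_step_up_challenge(challenge)
```

This parser is deliberately naive (quoted values containing commas would break it); the point is only the round trip: the client would read acr_values from the challenge and forward it in its next authorization request.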
I just pulled up the docs on it here, though. The reason I say it's related to what you were mentioning, Vittorio, is that it sounds like a lot of what can happen in this sort of middle chunk here is related to step-up authentication or doing other things. So you can do the first step of the OAuth flow, but then it might say, oh, you need to do other things, like go reset your password, or go fill out this other form to apply for something, whatever it is. Other things can happen in here, in this box, that are not described by the spec because they're not supposed to be. They don't need to be part of the spec. There just needs to be a slot for them. And then eventually you can show that you've done those things by using this, I guess, interaction code to finish out the flow. So yeah, I don't know. Have you seen anything like this before, Vittorio? I haven't, and it's interesting. Like, one pet peeve I have is that whenever there is an authorization server in a sequence diagram, I really like to have the authorization endpoint and the token endpoint shown as different things, because I'm old and my frontal lobe is getting thinner and thinner with every year, and so my working memory isn't as big. And so, like, seeing it on the page helps me. But yeah, I have never seen this attempted to be standardized. I've seen it, of course; like, this is another thing that everybody does. It's a bit of a different use case than the step-up we were discussing earlier, because this one goes to the authorization server right away, whereas in the other case, you are already talking to the API and something tells you you've got to go back to the authorization server. But this is interesting. It would be really interesting to see it expanded with some use cases. Yeah, and you can see here it's clearly building on OAuth, and on authorization code and PKCE in particular. It's building on all of that. It's just adding some new stuff in the middle.
It's kind of inserted in the middle of a traditional authorization code flow with PKCE. So yeah, I'm trying to see if they describe the use cases here, but I do remember the main reason for this, okay, and this is going to tie into the other thing we want to talk about, actually. So the idea is that you can configure what these steps are in your Okta tenant, but this, I feel like, is the key sentence here. It allows the apps to manage user interactions rather than relying on the browser-based redirect. So what that means is that instead of opening up a browser or launching a browser or redirecting to a browser, whatever that is for the platform you're on, because desktop and mobile and single-page apps all have different ways they interact with browsers, instead of a browser, we're talking about having the app do the interaction that would happen in a browser. So it's meant to build native UI elements into an app, such that it'll do things like collect a password, collect your SMS code or whatever two-factor challenge you're doing, and have that all be built into the app. So this is definitely, I don't want to say controversial, but definitely something that is, this is the thing, this is what you were talking about, Vittorio. This is where we have to be careful what we say. Yeah, because I can just watch the video and see what it's like to be able to do it. Yeah, this is one of the classic things. Like, a big advantage of using open standards is that you know that you are walking through a beaten path that a lot of people have checked for landmines, so in general it's a pretty safe path. That doesn't mean that you can only follow those paths, because every circumstance is different and standards, by their very nature, try to cast a wide net.
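Since the flow builds on authorization code with PKCE, here is a minimal sketch of the PKCE piece itself: generating a code verifier and its S256 challenge per RFC 7636. The function name is just illustrative:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    # verifier: high-entropy random string, base64url without padding
    # (32 random bytes encode to the RFC 7636 minimum of 43 characters)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # S256 challenge: base64url(SHA-256(verifier)), also without padding;
    # the challenge goes in the authorization request, and the verifier is
    # only revealed later at the token endpoint
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
```

Because only the hash travels in the front channel, an attacker who intercepts the authorization code cannot redeem it without the verifier, which is the property the interaction code grant inherits by building on this.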
So there might be mitigating circumstances that you can take into account and that let you deviate. In this particular case, this is one of those things that makes my spider sense tingle, because it changes one of the very fundamental assumptions we have, which is: we go to the browser, and we know the browser is well threat-modeled. It's the devil we know. So that doesn't mean that this thing is not doable. It just means it requires extra caution. It's kind of like I am at midnight at a red light and I have perfect visibility in every direction. If there is a truck barreling in my direction, should I move out of the way of the truck even if the light is red, or should I respect the red? Clearly there are circumstances where it's right to move and cross on red, but it's something that requires thinking. So it's not that you can't do it; it's that it requires thinking, and this is my very first time hearing about this. So personally I didn't do any thinking about it, so I cannot really comment. Well, so here's my take on it. One of the fundamental goals of OAuth when it was originally created was to avoid applications handling passwords, mainly because most of the uses of OAuth were for third-party apps. We had some company building an app talking to some other company's API, and it's very clear in that case why you don't want to give your Google password to random applications to access the Google API. No one's gonna disagree with that. So that was how OAuth started, for that use case, and then it's been expanded a lot over the years to also be useful for first-party apps, and that's why, for example, you do see when you go log into any Google app like Gmail or YouTube, Gmail and YouTube do not ask you for your Google password.
They redirect you to Google, to their OAuth server, and you log in there, and that lets them do things like consolidate all their multi-factor in one place, use strong multi-factor methods that are tied to accounts.google.com rather than tied to gmail.com, and that's all fantastic. And you will notice that Google has gone to the extreme in this and actually required that all of their own applications, including desktop, mobile, single-page apps, everything, have to do this flow, and they've just gone all in on OAuth essentially. And this new interaction code grant type is essentially rolling that back and saying, actually, if you want to build password dialogues into your app and handle multi-factor yourself, here's how to do it. Now, it's better than not having it, because the alternative to not having this is that people are going to do worse things, like build in password dialogues and not be able to do multi-factor at all. So this enables some of those workflows, but it is explicitly putting passwords back into apps. And again, for a first-party app you don't have the trust issue, because obviously, it's not like I don't trust the Twitter mobile app with my Twitter password, because it is Twitter, but you still have other risks involved and other limitations, like what types of multi-factor are possible, for the ones that are tied to a domain. So there's all those kinds of little tricky things that fall out of, or those are the implications of, sort of going around that workflow. So I think the reason this exists is because customers do want to take over the login flows completely, and there's only so much you can do to convince people to do things the most secure way, even though we can point to Google and say, look, even Google is doing it. So I think this was created to allow that flexibility in a less bad way, if that makes sense. It's a very good analysis.
And I think that the first-party versus third-party debate is a difficult one, and the Google example is a very good one because it shows both sides of it. Like, Google has been, for the exact reasons that you described, one of the most stringent identity providers in forbidding the use of embedded browsers. One very common practice a few years ago, whenever you had a native client and you wanted to do an OAuth flow for getting tokens, was to embed a surface within your app which would render HTML, and basically use that surface to hit the authorization server, do whatever interaction was required, and then get your code and get your token. And this thing is not good for multiple reasons. If you embed the browser, now your hosting application can listen to every keystroke that you type. So it's equivalent to having your username and password in there. And also, if you are already signed in in your system browser, now you have to sign in again, because your application rightfully doesn't have access to the cookie jar of the system browser. And so things evolved so that now on mobile pretty much everyone uses the system browser. Things evolved in terms of practices and also capabilities of the operating systems, which now allow you to invoke the system browser in a very convenient fashion. But the thing is, on the desktop, this didn't happen as much, because when you take a native client that runs on the desktop, by calling the system browser, you don't have the same kind of security advantages. And I promise I'll get back to Google in a second. I'm just taking the scenic tour. You don't have the same security advantages because the message pump for keystrokes is common across the board. So if a process runs as your user, then chances are that it'll be able to capture keystrokes not just for one app but also the others. And in terms of the experience, you really don't know much about which browser is on the machine.
So you can pop it out, but you don't know which one comes out. And then there is no easy way of coming back to your app. You cannot know for certain that your app will come back on top. So long story short, if you look at most of the common, very well-adopted native clients on the desktop, they actually still use a dialogue in their memory space which embeds a browser. And that includes Google. If you install the Google Drive app for Mac, the one that syncs your local file system with your Google Drive, you'll see that it pops out a dialogue with embedded stuff. Now, I had a Twitter thread with the Google guys about it and they rightfully pointed out that there is an option in the dialogue to say, I want to use the system browser. And so if you want to pop out of this embedded thing, you can. And if you are an advanced user, you should. And in fact, the GitHub desktop client, which is another very well-adopted native client on the desktop, does that by default. And if the user is advanced, like if you're a developer, you will probably want to use the system browser also because it's less typing. But Google themselves, in that particular case, because, and now it ties to what you said earlier, they are a first party. It's a Google application talking to a Google backend. So even if they would use the system browser, it would be a bit of theater, because they own every end: they own the app, they own the authorization server, they own the very relying party that you're calling. So sure, you could use the system browser and there would be advantages, but in that particular case, the user experience complications described earlier make it worth it. Even for them, the most committed, I'd say, among the various providers to OAuth, it makes sense to bend things a little when they are a first party. Sorry, that got much longer than I expected. No, that was a good sort of journey on that. No, yeah, it makes sense.
And it's a tough one for sure. But yeah, so bringing it back around to the interaction code grant, which now I should probably put back on the screen, which is Okta's new grant type, specifically for building more parts of the login flow into the native application. I'm obviously like, is it secure? I mean, yes, no, all at the same time, because what do we really know about any security? It's just, how confident are you at any given point in any system's security? So I can't say anything really definitive about that, but what we can say is: if you are concerned about the security of your native clients and whether those can be manipulated, impersonated, tampered with by end users, then you shouldn't use this, because you're having users give more of their credentials to the native app when you use this flow. If you do trust the security of your native apps, because you have gone to the trouble of auditing all of the third-party libraries that you're bringing into your single-page app, not hosting anything on CDNs that are not under your own control, et cetera, et cetera, not running on untrusted hardware that somebody might slip some malware into, whatever it is, then, you know, those are the kinds of things to think about. So I think having this as an option is better than not having it as an option, because not having it as an option means people will do worse things. So at least with this, it's the best way to do this if you are okay with what you're giving up by doing it. And I think this is a good segue to the other question that we were getting in there, which I guess you queued up wisely because you knew it was coming. One of the factors that you could actually use for determining whether you are okay with some of those little stretches is the way in which you confirm the identity of your client. And given that you are the editor of 2.1, I'll let you answer this one.
Yeah, so the question is, can you explain the difference between public, confidential and credentialed clients? I can't understand the reason why the credentialed client exists. And the short version is, what's the difference between confidential and credentialed? So this has been coming up a lot when we've been talking about this, so we may end up dropping the term, but the concept exists without the term. The reason we brought the term in in the first place is because the concept does exist in OAuth in general. The difference between confidential and credentialed is meant to be the difference of whether or not the credentials for the client were obtained while the identity of that client was well-established, like strongly established. So the biggest concrete example here is: you register for a developer account on GitHub. GitHub knows who you are because you have an account there. You've agreed to terms of service as a developer, et cetera, et cetera. You go into the developer website and you register a new application and you get a client ID and client secret. That client secret is doing two things, one of which is it's now associated with you as a developer, and there is an entity that is associated with this credential that is responsible for that credential. That is a confidential client. And that's always been the case with that term. There are hints at this other use of a client secret in the OAuth core draft, which we were trying to name credentialed, which is when there is a client secret but none of that other stuff is true. So the easy example here to imagine is: you build a native app, so you can't go and register as a developer and put a client secret into your native app, because then it wouldn't be secret, because you've given it to all your users. So you can't do that.
So instead, what the native app does is it ships with only a client ID, and then the first thing the app does when it boots up is it goes and does dynamic client registration, or some sort of ping out to the OAuth server, where it then gets its own client secret for that instance of the native app. Which basically means, as an app developer, I ship my app to the app store, 100 people download it, and there are a hundred different client secrets all associated with that app. That would be a credentialed client, because the credential identifies the instance of the app, but it doesn't prove that that app got that credential by being actually associated with me as a developer. You don't have that level of association. So that's the difference there. If I can offer a slogan version: confidential clients do identification, credentialed clients do authentication. As in, when you are a confidential client, you are assigned a credential that is tied to your identity, whether it's the user or whether it's the app in itself, but that credential has been issued in a context in which you are known. And so it says, okay, when I see this thing I know it's Aaron, or it's Aaron's app number XYZ. And so this is useful both for protecting that particular interaction, in which you are forced to use a credential, so you're already raising the bar of the quality of the security of the interaction, and I also happen to know to whom that thing corresponds, because it was granted in a context in which I knew who you were. Whereas in the case of a credentialed client, this is more of a transaction-specific trick. Like, I installed a native client and now, as Aaron said, I want to identify this particular instance so that the service knows, like, ah, yeah, you are the same guy that showed up yesterday. Then whether it's Aaron or Vittorio or a Mariam, it doesn't matter, because the thing that I care about is: are you the same one that showed up yesterday? So that's mostly the difference.
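The per-instance registration step Aaron describes (dynamic client registration, RFC 7591) can be sketched as the payload a freshly installed app instance would POST to the server's registration endpoint. The field values here are placeholders, and the actual HTTP call and the server minting a unique client_id and client_secret are omitted:

```python
import json

def build_registration_request(software_id):
    """Registration payload a native app instance could POST on first launch.
    Every installed copy sends the same software_id, but each gets back its
    own client_id and client_secret, making it a credentialed client."""
    return {
        "client_name": "Example Native App",             # placeholder
        "software_id": software_id,                      # shared across installs
        "token_endpoint_auth_method": "client_secret_basic",
        "redirect_uris": ["com.example.app:/callback"],  # placeholder scheme
    }

payload = json.dumps(build_registration_request("4NRB1-0XZABZI9E6-5SM3R"))
```

Note that the request carries no secret; the secret only exists after the server responds, which is exactly why the credential identifies the instance rather than the developer.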
And in terms of use cases, confidential clients are typically singletons. So websites, where there is one instance, or machine credentials. Credentialed clients are either native clients that want an extra level of protection, or a number of IoT scenarios in which you use that. Yeah, and then tying it back to the discussion from earlier about how much the authorization server trusts this particular client. That's why there are these differentiations. And the reason this is pointed out in the spec at all is because it's pointed out in the context of things like: do you want to prompt the user for consent when they're logging in to this application? For a confidential client that's a first-party client, you probably don't need to, because it wouldn't really make any sense and you don't really gain any security by having the user confirm what they're trying to do. For the most part; there is of course a phishing risk there, but that's kind of generally how we treat it. However, for a public client or a credentialed client, in both cases, because the identity of that application can't be proved by that credential, that is an opportunity where you may want to prompt the user and say, hey, did you know that you're trying to log into this application? And that gives the user a chance to say, oh wait, I was not trying to do that, so I'm gonna cancel out of this transaction. And a whole host of other sorts of decisions like that, about token lifetimes, or can you use refresh tokens, how long do these things last, do you need to do MFA, et cetera, et cetera. All those can be based on whether the application is confidential, credentialed or public. One thing that we take for granted as identity people, but not everyone grasps, is that the client identifier of an application is considered public information. It's not a secret.
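The consent decision just described can be written down as a toy policy function. The names and the exact policy are illustrative only, since real authorization servers weigh many more signals:

```python
def should_prompt_for_consent(client_type, first_party):
    """Skip the consent prompt only when the credential actually vouches for
    which application this is: a first-party confidential client. For public
    and credentialed clients the credential cannot prove the code base, so
    let the user confirm what they are signing in to."""
    if client_type == "confidential" and first_party:
        return False
    return True
```

The same shape of policy could drive the other decisions mentioned: token lifetimes, refresh token eligibility, MFA requirements, all keyed off the client type.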
And so if you do pure public client grants, as long as you know the client ID you can basically pretend to be any app you want. There are some mechanisms, although operating system specific, in which, for example, you can register specific URLs that only you ever control. And so you can somewhat contain the damage on some operating systems. But in general, whenever, as an authorization server or as a relying party, you get something that you know is coming from a public client, you can't make assumptions about whether that is actually the code base that you expect, not even the instance, because of these properties of public clients. And so from that point of view, credentialed clients are an interesting middle ground, because if you have a very large number of clients, you might be unable to model all of them as confidential clients, because it's expensive. But the credentialed client, especially when coupled with things like dynamic client registration and similar, can give you a bit of an extra level of protection that enables all the things that Aaron was describing in terms of making confident decisions about what a particular client can or cannot do. Yeah, I think that's great. Yeah, it's a tough one, because there's a lot of gray area here and a lot of it is not hard and fast rules. A lot of it is: how confident are you in various aspects of a system, and how willing are you to tolerate that lack of confidence? And the answer is often different in different situations, and that's fine. There isn't a single right answer, which is also, again, why the interaction code grant type is useful in some cases, when you don't have concerns that would make it dangerous. So it's all a big mess of gray area. It's very nuanced. That's a big challenge of this space, that there's so much nuance, and standards help when you are close to the yellow brick road, the classic scenario that the standard authors envisioned.
But of course the last mile is different for everyone. And that's why sometimes, if you want to stray from the basic path, you really can't avoid consulting with someone that really understands the space and can help evaluate your particular scenario. And unfortunately, as we were talking about earlier, this is one of the biggest challenges in our space: there is such a shortage of knowledge and talent that the bar for entering is high, but once you're in, you are in demand. So, good for y'all. Well, and yeah, along those lines, that is why both of us actually are publishing content about OAuth, right? We both teach and write blog posts and make videos, and we do these because we try to share this knowledge and have it be useful to other people. So, Vittorio, you have a video course series on the Auth0 website. I will- I do, I do. I even did a bit.ly for it. You did a what for it? I did a bit.ly. You did a bit.ly for it. Bit.ly slash learn identity, in camel case. Bit.ly, uppercase LearnIdentity, yeah, that one. Ah, okay, that's it, I found it. That's like the direct link to the course. I was younger, I had more black in my- Yeah, those were basically videos that we captured originally for the people that couldn't travel to headquarters. Like we said earlier, there is a big identity talent shortage, and Auth0 was and is growing like a rocket ship. And so we hired highly talented people with a lot of potential, but not always with a lot of experience in the identity space. And we looked at the landscape of offerings out there, and we decided that it was worth it to actually build something from scratch, just on identity. So not identity in Auth0, but just identity standards and why things are the way they are, starting from the scenarios all the way down to the parameters.
And normally, as is tradition, during the first week everyone new at Auth0 would travel to Bellevue, and I would actually lock them in a conference room for two days and squeeze them at the whiteboard for two days straight, because we're really tough. But then that just stopped scaling, and so we said, you know what, for people that can't travel here for the first week, we'll just record these videos. And we used it as an internal class for quite a while. And then at some point we said, well, you know what, why not just make this public? I wasn't saying anything particularly controversial. And so we just made it public, and it's out there. And apart from my accent, which you can easily work around because there is closed captioning, so far we've gotten good feedback that people find it useful. Even competitors tell me, you know what, I send my people to listen to your blabbering when they join. So to me, that's great validation. So if you guys want to try it, please give me feedback, of course. That's wonderful, yep. And I will also use this opportunity to mention mine as well. I did a similar thing, which is I created this workshop. It's a video course. It is not free, but it is about four hours-ish of content and exercises. And same idea: it is about OAuth in general, not an Okta training. It's about the concepts in OAuth. And it goes through a lot of the stuff that I cover in my workshops or in other videos on YouTube, all packaged up this way. It's not the exact thing that I do for our customers in our private workshops, but a lot of the concepts and the knowledge are captured in here, in a slightly different format. But yeah, I've been getting good feedback on this one as well. About 10 or 11,000 students have taken it, which is awesome. Wow. And once you're in this platform, you can actually go and post questions and stuff when you're doing the assignments.
And I do go in there and answer questions throughout as you're going through. So another really good resource. Udemy is a fantastic platform and the bar is high. So of course anything you do is high quality, but the fact that it's on Udemy is even more of a testament to its high quality. Fantastic. It is in the Udemy for Business catalog too. So if your work has a Udemy for Business subscription (they conveniently put a bunch of logos on the screen of companies that have that, which is nice), then you can get enrolled in this course for free. It's already paid for by the company. So that's a nice perk as well, which also goes for Okta and Auth0, because we do actually have a Udemy for Business subscription. So I've actually been sending people at Okta to this course as a sort of onboarding thing, if they're trying to get up to speed, because they can access it for free through that catalog. We do. And it's one of the things that I enjoy the most, frankly. One of the good things of the merger is that the benefits suddenly popped up, and I love it. Whenever I have free time, I just search and it has everything. It's really nice. I love it. It's fun. Yeah. All right. So there's one more question here that is on a new topic, but also relevant. Of course, we can talk about this for a few minutes. Yeah. Do you think it's an okay practice to store crypto private keys in the browser's local storage for signing tokens? It's a good question and it's a tough one. And I think this is gonna get back to a lot of hand-waving and saying it depends, because it depends. But the problem is there aren't a lot of better options at the moment. So if you're writing a JavaScript app and you're getting something from outside that you have to store somewhere, you pretty much have to store it in local storage. You don't have a lot of options.
There's no better storage API in browsers right now. However, it's often the case that you don't have to be doing that in the first place. So there are a lot of different options for doing an OAuth flow that don't involve tokens being seen by JavaScript. And if you can do those, then that's obviously safer. Because the big thing we're worried about here in general is cross-site scripting. If an attacker can get access into your website through a cross-site scripting attack, then the problem is that they can see any of that app's storage, including any tokens that are stored, which means they can then steal those tokens and use those tokens. And if it's a refresh token, then there's no client secret, obviously, so they would be able to use the refresh token to get new access tokens. And now they're off in their own little world, making API requests outside of the browser. So that's the main concern that exists. So cross-site scripting: that means auditing your code to make sure that you're not bringing in untrusted libraries, making sure that you're not accidentally running npm update and pulling in some compromised version of a dependency. And if you're confident enough in your ability to prevent cross-site scripting attacks, then it's not the end of the world to store stuff in local storage. Personally, I don't think I would ever be confident enough that my code has prevented cross-site scripting attacks, so I would rather just not have tokens in the browser in the first place. So you can use the backend-for-frontend pattern, which is where you essentially put your OAuth client in a server that is paired with the browser. So the browser never sees tokens. And then cross-site scripting attacks don't affect you, because the attacker wouldn't be able to steal a token; the browser doesn't even have it. And then the other thing I wanna mention on this is there is a browser API for creating a private key that can't be extracted.
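Here's a tiny sketch of the backend-for-frontend idea Aaron describes, with a hypothetical in-memory session store: the server holds the tokens and attaches them to proxied API calls, so browser JavaScript (and any XSS payload) never sees them.

```javascript
// Backend-for-frontend sketch: tokens live server-side, keyed by an opaque
// session ID carried in a cookie. Hypothetical names; a real BFF would also
// handle the OAuth flow, cookie security, and token refresh.

const sessions = new Map(); // sessionId -> { accessToken, refreshToken }

function storeTokens(sessionId, tokens) {
  sessions.set(sessionId, tokens);
}

// The browser calls the BFF with only its session cookie; the access token
// is attached here, server-side, so there is nothing in the browser to steal.
function buildProxiedRequest(sessionId, apiUrl) {
  const session = sessions.get(sessionId);
  if (!session) throw new Error('unknown session');
  return {
    url: apiUrl,
    headers: { Authorization: `Bearer ${session.accessToken}` },
  };
}
```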
That non-extractable key capability is part of the Web Crypto API. So it's not everywhere yet, but it's nearly everywhere. And you can't use that for OAuth yet; that's what we're trying to do with the DPoP draft, is make that possible. But the idea is that if the browser can create a private key where the JavaScript does not have access to the private key material, then obviously that can't be stolen, which is good. So it's not the same as storing external tokens internally. It's more about generating a private key and then signing stuff with it. So I'm excited about the future that that's going to enable, but it's not really accessible yet. That was a big sigh. What was that? Well, it's tricky. Sure, not being able to extract the key helps, but people can still get your code to sign their stuff on their behalf. So in general, this thing about putting crypto in JavaScript is complicated. I subscribe to that. I might never be confident enough to do it. And then one has to wonder what you would use it for. Because the use case mentioned in the question, which is signing tokens, seems very controversial. That entails the fact that there is an entity which is trusting your particular client, which is the worst kind of client, the browser, to actually be authoritative about the content of the token. And that's very dangerous, because it's a browser. And also, tokens typically have at least a bit of lifetime. Whereas I can see, for DPoP, having transient keys for protecting particular interactions. Like, I'm protecting this particular leg in this particular session. So sure, it's a key, but it's a key just for that transient thing. I think that that is tolerable. But having a key that actually allows you to mint new tokens in the browser, I'd be very, very, very nervous about.
Like, if this were something that our field would come to me, as architect, and tell me, we are proposing this thing to the customer, the bar that I would give them to convince me that it is something viable would be very, very, very, very high. So I'd be very careful about it. And here I'm not as extreme as some of the people that say that the browser shouldn't see any token at all, which would be an even lower bar than actually having an active capability of minting tokens. Like, there is a bunch of people saying, no, you can't ever have any token in the browser, you must have a backend and this backend must do everything. I'm not that extreme, because I think that that architecture is extraordinarily difficult to put together. In the easy case, it's easy. But you might have a larger system that needs to be in different geographies, in which you are serving content in different places, in which your APIs have some intelligence that knows where you're calling from and serves you from different regions. Or you have anomaly detection which is based on the particular browser. So there are a number of complications that don't always make it possible, or even convenient money-wise, to go through the backend for absolutely everything. So for me personally, I'm okay with access tokens at least being in the browser, with all the precautions that we listed. The next level is refresh tokens, which are not exactly keys, but they are still generative artifacts. Those are artifacts that can be used to obtain other artifacts. And there, as long as you have refresh token rotation, which is: every time you use a refresh token, it stops working and you get a new one, then I think that for transactions that are not critical, that are not irreversible, that are not too valuable, and similar, I'm okay with that. And I will be okay also once we have sender constraining. Today we don't have it yet.
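The refresh token rotation Vittorio describes, where every use invalidates the old token and issues a new one, can be sketched like this. The in-memory authorization server state is purely illustrative.

```javascript
// Refresh token rotation sketch: each redemption kills the old token and
// issues a new one in the same "family"; reusing an already-rotated token
// is treated as theft and revokes the whole family.

let counter = 0;
const active = new Map();          // live token -> family
const rotated = new Map();         // spent token -> family (reuse detection)
const revokedFamilies = new Set(); // families killed after detected reuse

function issueRefreshToken(family) {
  const token = `rt_${counter++}`;
  active.set(token, family);
  return token;
}

function redeemRefreshToken(token) {
  if (rotated.has(token)) {
    // Someone replayed a token that was already rotated away: assume theft
    // and revoke the entire family, stopping attacker and victim alike.
    revokedFamilies.add(rotated.get(token));
    throw new Error('refresh token reuse detected');
  }
  const family = active.get(token);
  if (family === undefined || revokedFamilies.has(family)) {
    throw new Error('invalid refresh token');
  }
  active.delete(token);      // the old token stops working immediately
  rotated.set(token, family);
  return issueRefreshToken(family); // hand back the replacement
}
```

Note the design choice in the reuse branch: revoking the whole family is what forces the legitimate user back through an interactive login, surfacing the theft.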
So mTLS is not really practical for native clients and similar. So I'm a pretty liberal guy in that respect, but when it comes to keys, I think that for me it's a bridge too far. Apart from protecting transactions as in DPoP, having keys in the browser would make me too uncomfortable. So I'd rather pay some other price somewhere else in the solution rather than do it. That makes sense, yep. Well, and that actually kind of gets back to your new draft that you're gonna work on, because you mentioned a refresh token can be used to get new tokens, right? And with the rotation that's good, but you mentioned only really trusting that for things that are reversible, right? Not the sort of high-value transactions. Because even with rotation, there's still the chance that if someone can steal it at just the right time, they can then go run with a new token, as long as the real app doesn't use it, which is a lot of conditions, and whatever, it's probably not gonna happen, but it can. And it means that you technically could be in a situation where an access token was used by an attacker that was obtained through a refresh token that was stolen two months ago, and that's not a super great situation to be in. And your step-up thing is a potential solution there, because if the resource server can tell that this access token was created from a refresh token that was first issued two months ago, and the user hasn't actually really been involved since then, the resource server might say, okay, I'm about to actually go and transfer $10,000, so I want a newer grant. I want an access token that was issued from a user being present at most five minutes ago. And then you could use that step-up model, or challenge, whatever you wanna call it, where you throw an error saying, no, I don't trust this access token anymore.
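That check-and-challenge could look something like the sketch below. The error name and `max_age` parameter follow the step-up authentication challenge draft being discussed; the claim names and threshold are illustrative assumptions, not a definitive implementation.

```javascript
// Sketch of a resource server rejecting a formally valid access token
// because the user authenticated too long ago. The challenge tells the
// client what to ask the authorization server for, so it doesn't come back
// with the same token. Claim and parameter names are illustrative.

function checkUserPresence(tokenClaims, maxAgeSeconds, nowSeconds) {
  // auth_time: epoch seconds of the user's last active authentication.
  if (nowSeconds - tokenClaims.auth_time <= maxAgeSeconds) {
    return { ok: true };
  }
  return {
    ok: false,
    status: 401,
    headers: {
      'WWW-Authenticate':
        'Bearer error="insufficient_user_authentication", ' +
        `max_age=${maxAgeSeconds}`,
    },
  };
}
```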
It's saying the user was there too long ago, please go get a new one, and then the app has to go and get the user involved again, and that would stop the attack, because the attacker doesn't have access to the user. That is a fantastic use case. Thank you for that. Yes, like the ability- Did you put that into the spec? Yeah, I was seriously considering that. The situation in which the API might decide that, yeah, this token might formally still validate, but it's not good enough for me, is exactly the central use case for that spec. And the ability to give the client instructions to actually do something about it is helpful as well, because you could just say, I don't like it, but then, if the thing that you don't like is something that the authorization server doesn't know about, you risk the client going back to the authorization server and getting a token that the API will still refuse. In your scenario, in which it's age, we'd be fine, because you'd get a new token and so the API would accept it. But if instead the API would say, for these particular transactions I need you to do 2FA, then if you don't actually specify it when you go back to the authorization server, they might just give you the same token as before. So, yeah, great. Thank you for mentioning that use case. Yep. Well, cool. We got a list of things to do now, so we should get back to writing some specs. Yeah. But yeah, we are at the top of the hour, so thanks everybody for joining. We will be back at some point, I don't know when yet, but you can always find the next stream coming up at oktadev.events, and we'll drop that into the chat. That's where we post the schedule of the upcoming happy hours, as well as other events that are happening that either of us or the rest of our colleagues are going to be participating in: conference talks, things like that.
So, only a little bit is scheduled right now, but we'll get some more stuff on the calendar, and you can always add this to your own Google or Apple calendar as well. There's an iCal feed right there. So, yeah, this was fun. Good discussion today, and I think we both gave ourselves a bit of a to-do list out of this talk. So, we will get on that. All right. Thanks everybody. See ya.