Good morning. Good afternoon. I'm Aaron Parecki, and I'm joined by Vittorio. Ciao. Hi. We are here for another round of OAuth Happy Hour. We're going to be chatting about all things OAuth, talk about what's new, what's been happening, what to expect coming soon, and answer questions. So if you have any questions throughout the stream today, feel free to drop them in the chat, and we'll keep an eye on things and see what we can do, see what we can help you with. So, where do we start? Last time we chatted, I think, was a while ago, quite a while ago. Oh, yeah. It was like July. July, right before the IETF meeting in Philadelphia. And, yeah, so we were preparing for that, going into that event, and then it's been all summer now. Summer just flew by, so. Yeah. It's very sad. Although, here in the Pacific Northwest, it's been very unusual. Like, normally at this time, I am wondering why I left Italy and came to this dreary place. And instead, it's gorgeous outside. In fact, the trees are kind of like, is it fall? Is it not fall? Some of them started with the leaves, others are kind of like still green, and all the leaves on the ground are dry, so it's like a big powder. Instead, normally it's a paste, because it rains. So, yeah, but unfortunately, summer is indeed over. It's officially over, but I think the weather hasn't gotten the memo yet, because it is still extremely sunny in Portland as well, so. Pretty nice. But technically, Portland is also Pacific Northwest, right? Yes, yes. So, okay, where do we start? Let's see, there's been a couple of events after Philadelphia that you went to, which I missed out on, so I would love to get a recap. Well, yeah, I was privileged to be able to go to a number of events, and one was Gartner IAM, which happened in August in Vegas, and it was very interesting, because you and I are blue collars. We work at the protocol level, but that's not the kind of conversation that you hear in those hallways. 
Here, you actually have the people that make decisions about how the money is going to be spent. So, they have an impact on us blue collars, but you can't go there and say, well, you should use PKCE, because they'd say, PKCE, what? Is it a new kind of drug? So, you really argue at that level, and it's so healthy. And it was interesting, like, they did a nice presentation about the hype cycle, and when they dug deeper into the quadrant, I'd say that the thing that was most relevant for us in the protocol space was a session about the future, in which they were looking at traditional IAM, what we call modern IAM, which is where we are now, and then the identity fabric, which is this hypothetical future in which some of the things that today we are doing explicitly, like event passing, anomaly detection, all of that stuff, will actually be part of the identity infrastructure. And they mentioned a number of protocols, including OpenID Connect and OAuth, as being completely mainstream and consolidated. So, that's good news for us. Let's say that at this point, I don't think anyone here has any doubt that we are actually truly mainstream. But the other interesting thing is that they made a list of open protocols and they mentioned OPA, which is not a protocol, it's more of a product. But to me, it points to the fact that authorization is becoming more important as a topic if these people end up putting it in there. And that was Gartner. The other thing that happened was Identity Week Asia, which was in Singapore. I was on vacation in Singapore, so I spent a couple of days there. And the interesting part there was all the buzz about verifiable credentials. And I saw a couple of presentations which were really interesting, one from the Singapore government, which was very successful in creating an ID system without verifiable credentials. But now that they've succeeded, they are starting to look at interop systems. 
And this will tie back to when we talk about Selective Disclosure for JWTs in a moment. And the other was IATA, the organization that deals with travel. And they brought up really good uses for some of those things, like the fact that whenever you travel to a different country, the COVID requirements are a nightmare, a labyrinth of different combinations. And the key point is the fact that if we have a system that formalizes those things as credentials, like your test results, your vaccines, your passport, as credentials, and you have some software that can do the right combination for the job, then it would be so much easier for users. So valid use case, unclear how we get there. As part of this, we had one from OIX, Nick Mothershaw, who presented the vision of smart wallets, which eventually led to an announcement the week after from the Linux Foundation that announced the OpenWallet Foundation. And long story short, he described wallets as entities. But the thing that I hadn't heard before was this idea that you could push to such wallets requirements just like the ones I just described, saying, okay, dear wallet, I need a test which is not older than 48 hours. I need a vaccination which needs to have happened in the last six months and not be older. And I need a passport from this list. So kind of like, imagine a smart contract that can run on your wallet. That was something new that I'd never heard before. And that's interesting. It is interesting. And I think crazy. Think of what normally happens, like what you hear in the news about smart contracts, in which these smart contracts are so smart, but occasionally they make $100 million evaporate because of some tiny bug. And when you read the explanation of what happened, you have to read it three, four times before getting what happened. And now imagine that that thing lives in your pocket. Personally, I find it terrifying, but we'll see what happens. 
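As a toy illustration of the "smart contract on your wallet" idea just described, a pushed requirement could be evaluated locally against the credentials the wallet holds. Everything here is made up for illustration; no real wallet spec or data format is being followed:

```python
from datetime import datetime, timedelta

# Hypothetical travel requirements pushed to a wallet:
# a COVID test no older than 48 hours, and a vaccination
# within the last six months.

def meets_requirements(credentials, now):
    """credentials: list of dicts with 'type' and 'issued_at' (datetime)."""
    def newest(kind):
        matches = [c["issued_at"] for c in credentials if c["type"] == kind]
        return max(matches) if matches else None

    test_time = newest("covid-test")
    vax_time = newest("vaccination")
    test_ok = test_time is not None and now - test_time <= timedelta(hours=48)
    vax_ok = vax_time is not None and now - vax_time <= timedelta(days=180)
    return test_ok and vax_ok

now = datetime(2022, 10, 14, 12, 0)
wallet = [
    {"type": "covid-test", "issued_at": now - timedelta(hours=20)},
    {"type": "vaccination", "issued_at": now - timedelta(days=90)},
]
print(meets_requirements(wallet, now))  # True: both rules satisfied
```

The hard part, of course, is everything this sketch skips: verifying the credentials are genuine, and deciding who gets to push rules to your wallet in the first place.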
Maybe I'm just old and new people will be comfortable with this. We'll see. So actually, you mentioned the vaccine info as a part of that. And what I think is really interesting is that there are now two of these that exist. The one in the US follows the actual Verifiable Credentials spec, and the one in Europe follows a different set of specs, but it's the same thing architecturally. And it already is a thing. Like, we've been using it now for a couple of years, right? You can get your vaccine card, scan a QR code, and it goes into your iPhone, into your Apple Wallet. And that's all done through verifiable credentials. There is no blockchain involved, just JWTs and signatures and public keys and issuers. And then the apps that verify it know how to read that. And yeah, it's really interesting. It's happening. And the Apple Wallet is the place where those live. There are no smarts other than the scanning app just knowing how to show the data in the QR code. There are no smarts about, oh, we need to combine multiple factors together and all that kind of stuff. I think that's super cool. But we're getting there. And it's not just theoretical, I guess, right? It's not just theoretical. You're right. And there's more than that. Like, there is the CLEAR app. If you ever went through CLEAR, it has the possibility of uploading your vaccine. And it will give you the verifiable credential, which I had to use for, I don't remember which event. Some event required the CLEAR app, and they used it to validate the fact that you were vaccinated. Now, the twist with these is that when people talk about verifiable credentials, they usually have the issuance part, in which you get the stuff from the issuer tied to some kind of keys that you have in your wallet. 
And then whenever you need to do a presentation, you send what you got, which has the signature of the issuer, but then you sign this thing with your own key. And there is a tie between the signed part from the issuer and your key's signature, so that it truly shows that it's you that received that original issuance. In the case of the verifiable credentials that you get for the vaccine, it's not like that at all. Let's say that it's just a QR code, which makes it easier for the machine to read instead of a JWT or any other format, but it's just a static thing. And what's worse, the first time I saw it: if you pull out your iPhone and you train your camera on my QR code, and you follow the link that the camera recognizes, you're gonna install my verifiable credential in your wallet. That's what happens. It's a fun party trick. I've definitely done that several times. I have many vaccine cards on my phone now for different people who I've hung out with. Well, that's the thing with techno-optimism: ooh, verifiable credentials! And then it is actually lower security, in a way, than showing your paper thing, which I always bring with me. Because with the paper thing, you can look at it, but you can't steal it. Whereas someone might pretend to use a validation app, get your thing, and then be you wherever they want. So there is also this aspect of, like, just doing a little step in that direction doesn't necessarily mean that we are in a better place. And that's one of the things that happens when you have a lot of people in a space that carries a lot of hype: people often don't stop and think, what are the unintended consequences of some of this stuff? One of the other fun ones is, I was able to read enough of the documentation about how this stuff works to be able to issue my own credential, essentially. Because it's all public, open specs. You have to encode it in a specific way to make it a QR code. 
You have to sign it and host your public key somewhere. But you can do that, and you can then scan that into your Apple Wallet. Because Apple does not have a list of known issuers, trusted issuers. They will accept any issuer. So I was able to actually create one of these for... Let's see if I can get this on the screen. I was able to create one of these for my cat. And now my cat has a vaccine card on my phone, and if you scan this, you will then also have my cat's vaccine info on your phone. And you can see the issuer there is my own web server. It's not a real healthcare provider. But Apple doesn't have a list of real healthcare providers. They just accept any signature at any URL. Scan it, but my monitor is not... Too low res. Yeah. But I thought that was interesting, that Apple apparently doesn't care about a list of, okay, only these healthcare providers are allowed to issue these QR codes. But other vaccination check apps do care. They can't. Let's say that... They can't. I didn't want you to blow this entire time up to talk about... Yeah. It is a key problem, because verifiable credentials try very hard, for good reasons, to eliminate a number of intermediaries. But the thing is that very often the intermediaries run useful logic. So when you go to an OpenID provider and you say, give me a token, and here is the URL where you should send it once you've produced it, or give me a code, whatever, the OP will go and look into a list. And if that URL isn't there, it's going to say, nope, I ain't going to give you anything at all. Instead, in this brave new world, we want to enable people to have something in their pocket and present it to whomever. But that means that it's also hard to protect them. And so, sure, the wallet is a potential place where you might have some of that logic. But it's hard. Then at this point, sure, you can say, I'll execute this logic without calling home or locating the app. 
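To make that self-issuance trick concrete, here is a loose sketch of the flow described above, with everything hypothetical: real SMART Health Cards use an ES256 JWS with a deflated payload encoded into an `shc:/` QR code, whereas here a plain HMAC stands in for the signature, and the issuer URL and key are invented:

```python
import base64
import hashlib
import hmac
import json

# Rough shape of a signed health-card-style credential. The HMAC below is
# a stand-in for the real ES256 signature; the issuer URL is made up.

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

signing_key = b"my-cat-clinic-private-key"       # hypothetical key material
header = {"alg": "HS256-stand-in", "kid": "key-1"}
payload = {
    "iss": "https://example.com/cat-clinic",     # any URL you control
    "vc": {"patient": "Mr. Whiskers", "vaccine": "rabies", "date": "2022-06-01"},
}

signing_input = (
    f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(payload).encode())}"
)
signature = b64url(hmac.new(signing_key, signing_input.encode(), hashlib.sha256).digest())
credential = f"{signing_input}.{signature}"      # header.payload.signature

# A verifier with no allow-list only checks that the signature matches
# whatever public key the `iss` URL serves -- which is exactly the gap:
# nothing says example.com is a real healthcare provider.
```

The point of the sketch is the last comment: the crypto all checks out, but without a trusted registry of issuers, "valid signature" and "trustworthy issuer" are two very different statements.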
But is it going to scale if you want, for example, to do exactly what you just said, validate only the proper providers? Where is the registry of all the providers, and who vouches for it? Some people will come up and say, well, I don't care where it lives. It's not where, it's how you decide to trust it. But anyway, my recommendation is that we move on, otherwise we'll spend the entire time talking about this. OK. Yes. So that was Identity Week. And then the last one that I'm going to mention was TPAC. You see me stretch my neck like a crane because I have my calendar on the wall, but I have a 34-inch monitor that is blocking the last four months of the year, and so in order to see the last four months, I have to stretch up. TPAC is the W3C conference. It was fantastic. I had never been to that conference, and it was really good. And they did all sorts of W3C-related things. In my case, what I was really interested in was the browser impact on identity. And so I spent the time in the privacy working group and in the FedID group, which is the one that we put together so that we actually have a forum between browser vendors and identity vendors. And then I spent time on the WebAuthn discussions because, you know, passkeys and all of that stuff. Super interesting, really important problems. In particular, browsers did a number of things like deprecating third-party cookies, or being in the process of deprecating third-party cookies, which impacts identity, but mostly for things like logout and background token refresh. 
But now we are starting to talk in practice about bounce tracking protection. Bounce tracking is basically when an advertiser, whenever you click a particular link, redirects you through an intermediate domain, copying all the context information and then sending you back on your way, as a way of working around third-party cookie blocking. And so browsers are trying to find ways of stopping this. And for example, they might say, okay, if this domain is a tracker domain, I'm going to drop all the query string parameters whenever you redirect there. Now imagine you have an authorization server or an OpenID provider and you receive a request with no query string parameters. Well, you might say, let's use PAR, okay, possible solution, but you see a problem with this, right? And so the browsers are trying to do the right thing. Like, Google came up with a proposal saying, if you interacted with that domain at least once in the last month, I'll let you keep all of your parameters. But it's not a perfect solution, because imagine that that domain might be used for another reason. Say that domain is a newspaper, and you go there for reading the newspaper, not for an identity provider. Now any domain that has interaction can become a tracker and work around these protections. Complicated problem. And then on the other side, the WebAuthn part, you can imagine there was a lot of discussion about passkeys, and there are complications, mostly because now, in the new world, every key that is not tied to a security key might be treated by operating systems as a roaming key. And so if you were doing WebAuthn in the past and you were using it as a mechanism to enforce that authentication is coming from a particular device, now you will no longer be able to do it. 
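For reference, the PAR workaround mentioned a moment ago (RFC 9126) looks roughly like this, with hypothetical endpoints and made-up values, and with the catch Aaron hints at spelled out in the comments:

```python
from urllib.parse import urlencode

# Step 1: the client sends the full authorization parameters to the AS's
# pushed authorization request endpoint over the back channel (a direct
# POST, not through the browser). Values here are illustrative only.
pushed_params = {
    "client_id": "s6BhdRkqt3",
    "response_type": "code",
    "redirect_uri": "https://client.example.org/cb",
    "scope": "openid",
}

# The AS answers with a short-lived, one-time reference, e.g.:
request_uri = "urn:ietf:params:oauth:request_uri:6esc_11ACC5bwc014ltc14eY22c"

# Step 2: the front-channel redirect now carries only two parameters,
# so the sensitive request details never ride through the browser:
authorize_url = "https://as.example.com/authorize?" + urlencode(
    {"client_id": pushed_params["client_id"], "request_uri": request_uri}
)

# The catch: request_uri is *still* a query-string parameter, so a browser
# that strips the query string on a redirect to a suspected tracker domain
# breaks PAR just as thoroughly as a plain authorization request.
```

Which is the problem being alluded to: PAR shrinks the query string, but it cannot eliminate it.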
And there are a number of things that might happen to help. Like, your authenticator might get what we call an attestation, which means that particular piece of code went through a certification process and now there is a key that certifies, yes, these keys come from this particular operating system. But a lot of vendors don't want to do it. There is a new mechanism, which is called DPK, device public key, an extra key tied to the device that you can use together with WebAuthn, but it has the same problem with attestation: without it, it's not very useful, and vendors aren't crazy about that. So next week in Seattle, there's going to be the Authenticate conference, where I'm going to present, by the way, and the FIDO plenary, and a lot of these things will be discussed there. But anyway, those are the events I went to since the last time we met. Yeah, I didn't realize that was next week. That's great. I hear this, and I have a Pavlovian reaction of my heart skipping a beat, because I'm almost done with my deck, but not quite, and I was supposed to deliver it one week ago, but circumstances, so I'm late, and I feel very guilty about it. I'm very excited, because, to do a little bit of self-promotion, we'll do this session about how attacks on CIAM are different than attacks that you would normally get in workforce, and how you need to defend in a different way. Like, for example, in CIAM you have functions where people register themselves, sign up, which you don't have in workforce. Or the kind of credential stuffing that you expect in workforce is usually long-term: they wait for the right moment, and then they try to be as quiet as possible, because they want to do espionage. Whereas CIAM is super loud; it lasts for two months and then suddenly stops for no reason. And at Auth0 we've been doing this for many, many years, so we have a lot of data to analyze, and I'm very excited that we'll have a chance to tell a bigger story about this thing. 
So that's the plan for next week. That's very cool. Is that going to be eventually publicly accessible, or is it only for the live audience, or are there any replays? So all the sessions I did in the prior years of the conference are available on YouTube. So my assumption is that this year will be the same, but I can't tell for certain, but I think that will be the case. Very cool. All right. That's a lot of events, a lot of stuff happening. And then in terms of next events, you mentioned Authenticate next week, and then three weeks from now is the next IETF meeting already. So that's IETF 115 in London, so there'll be a bunch more discussions from all the different groups happening there, and I will be there. I'm excited. Yeah, it'll be fun. So that takes us to... Let's talk about some of the drafts, some of the work that's been done in the OAuth group. So we've got a list of drafts here. Since July, there's actually been a whole bunch that have gotten updates. We've got OAuth 2.1, DPoP, SD-JWT, KSAP, the Security BCP, and step-up authentication, which just published an update yesterday. Yeah. So all sorts of things. Let's start at the bottom, I guess. OAuth 2.1, so this update was published just after, or possibly even during, Philadelphia. I can't remember now, it's been a while, obviously. They still have a bug where it doesn't redirect. I filed this ages ago. Normally, their URL redirects: if you don't include the draft number, it'll auto-redirect to the latest, but their code thinks the "-1" is a version number that doesn't exist. So I have to type it in manually. I have notified them of this bug, and they said they were going to fix it. Anyway, draft 6, published July 24th. You can click through to the diff to see quickly what's changed. This was based on the discussions at Philadelphia. 
I remember adding this section about access token validation, because it turns out OAuth 2 doesn't actually say anywhere in the spec that you have to validate access tokens. It just wasn't there. So now there's a section that talks about validating access tokens. Nothing complicated. Obviously, there's not a lot you can say about validating access tokens other than validate them according to the standards you have defined of what it means to be valid. It's also an opportunity to, for example, reference out to the JWT profile document. Very nice. Yes. Then there was one more, which was... I saw some diffs down here, which I think were just reordering the sections about how native apps handle redirect URIs, to put them in order of which are preferred. That's all it is. It's just reordering the sections. What is the best way to handle redirect URLs in native apps? Well, it's claimed HTTPS URLs, and then the private URL schemes are not super great, because there's no registry for them and anybody can step on each other's toes that way. So they got reordered. Nothing huge there. So yeah, mostly minor updates on that one. Yeah, that section is one of the reasons I'm very disappointed that I will not be at IETF London, because I expect that the native client stuff will be discussed. And I think that of all the feedback that you got, I might have been the most vocal person about differentiating between mobile apps and desktop apps. Giving the same guidance to both is not a great service for developers, because they have different security characteristics and different experience characteristics, unfortunately. And so just having a blanket recommendation, like, for example, use the system browser, doesn't have the same thrust behind it. 
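Backing up for a second to the new access token validation section: the checks it gestures at boil down to something like the sketch below, in the spirit of the JWT access token profile (RFC 9068). All names and values are illustrative, and signature verification is deliberately omitted; a real resource server must verify the JWS against the issuer's published keys before trusting any claim:

```python
import base64
import json
import time

def decode_claims(jwt: str) -> dict:
    """Decode the payload of a JWT without verifying it (demo only)."""
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def validate_access_token(jwt, expected_issuer, expected_audience, now=None):
    """Claim checks only -- signature verification intentionally left out."""
    claims = decode_claims(jwt)
    now = now if now is not None else time.time()
    if claims.get("iss") != expected_issuer:
        return False
    aud = claims.get("aud")
    aud = aud if isinstance(aud, list) else [aud]
    if expected_audience not in aud:
        return False
    if now >= claims.get("exp", 0):
        return False
    return True

# Demo token (unsigned, for illustration only):
claims = {
    "iss": "https://as.example.com",
    "aud": "https://api.example.com",
    "exp": int(time.time()) + 600,
}
demo = ".".join([
    base64.urlsafe_b64encode(json.dumps({"alg": "none"}).encode()).rstrip(b"=").decode(),
    base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode(),
    "",
])
# validate_access_token(demo, "https://as.example.com", "https://api.example.com") -> True
```

The spec's point is simply that these checks, obvious as they look, were never actually stated anywhere in OAuth 2.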
Like, for example, the keylogger argument, which is one of the main reasons you don't want to embed a browser in a mobile app: when you switch to the system browser, the system browser is out of reach of anyone else, so you actually defeat that attack. But on the desktop, where the message pump is common everywhere, and the isolation between apps, it's not that it doesn't exist, but it's not as consistent and as strong, it makes less sense. And in terms of experience... So, long story short, I already know what will happen when you guys discuss this. There will be no champion for this particular angle, and so you'll just make updates reflecting the discussion. And then on the list, I'll have to bug you, trying to make one of those points over email instead of in person. So just be prepared for it. Okay. I will try to keep that in mind and see what we can do there. Oh, and then there were a couple more things, and thanks for this question: what about credentialed clients? Was that removed? Yes. So I actually found the diff where I wrote in the notes about what changed. There were a couple of other things, and up at the top, in fact, is removing the credentialed client term. So that was something that we discussed before and then actually went and finalized based on the discussions there. That was not actually a helpful term. It wasn't adding anything. It was not actually clarifying. It was actually making things more confusing. So instead, what we did was take that out and updated the definitions of confidential and public to only mean whether the client has credentials, nothing about the trust level of the client. Because the original OAuth 2 spec had a lot of language around trust levels of confidential and public clients based on whether they have credentials, which is only the case in certain limited situations. So now confidential and public only refer to whether it has credentials. 
It does not care how the credentials were obtained. So a native app can be a confidential client if you are able to give it credentials. If you give it credentials, it's a confidential client. However, if you give it credentials by shipping them in the app store, please don't do that, because it doesn't serve any purpose. But technically it would be a confidential client, because a confidential client doesn't imply trust anymore. So yeah, that was another big change there. And as usual here, we have a layering problem, because from a protocol perspective, all of this makes perfect sense. All we care about in this particular domain is the ability of the app to assert its identity, regardless of its history, how it got there. From the developer perspective, there is a lot of correlation here, in that traditionally they've been trained to consider confidential clients to be things like singletons, like websites, for which there is one instance, and so asserting the identity of an instance makes sense. Well, it makes sense, and it has a number of properties that come with it which are not strictly tied to the fact that it can assert its own identity. It's just that it's a singleton, and as such there are a number of things that you do with it. Now, the moment you move to a different model, in which it's just about the credential you have, like if you have multiple instances of Slack and all instances can make a statement about their identity, then you can do other interesting things. But until now, all native clients were treated as unable to assert their identity, because a client ID isn't a secret and anyone can assert it. 
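As a side-by-side of what the definition now turns on — nothing but the presence of client credentials — here are two illustrative token requests, with the endpoint, IDs, codes, and secrets all made up:

```python
from urllib.parse import urlencode

token_endpoint = "https://as.example.com/token"   # hypothetical AS

# Confidential client: authenticates with a credential it holds,
# regardless of how that credential was provisioned.
confidential_request = urlencode({
    "grant_type": "authorization_code",
    "code": "SplxlOBeZQQYbYS6WxSbIA",
    "redirect_uri": "https://app.example.org/cb",
    "client_id": "web-app",
    "client_secret": "kept-server-side-only",     # provisioned out of band
})

# Public client: same grant, no secret at all --
# PKCE's code_verifier carries the binding instead.
public_request = urlencode({
    "grant_type": "authorization_code",
    "code": "SplxlOBeZQQYbYS6WxSbIA",
    "redirect_uri": "com.example.app:/cb",
    "client_id": "native-app",
    "code_verifier": "dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk",
})
```

The only structural difference is the `client_secret` line, which is exactly the point of the revised definition: confidential vs. public says nothing about trust, only about whether that credential exists.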
So there are a number of dynamics which are not entirely obvious, and I think we need to help developers understand the impact of these things, so that when they hear confidential client they don't also hear singleton, but they just hear confidential client from a protocol perspective, and they don't make assumptions that were true, mostly true in practice, until recently. Yeah, and I think it's going to be quite a bit of work, and I should probably do some more videos on YouTube about it. I've talked a little bit about it before, but I think it's going to take another campaign. And yeah, this comment: any app from any store can be decompiled and the secret stolen. Exactly, which is why you can't put any secrets in mobile apps that you're shipping into the app store. It's just not going to work. But what you can do is install a secret after the app is downloaded, and it's somehow, hand-wavy somehow, then all of a sudden you can use the secret. And there are examples of this; they're just extremely limited, because it doesn't happen much, because it's adding more steps. One example would be: you download the app from the app store, but before you're able to log in, you have to go to your IT department and get them to install a certificate on your device, and that is the credential that turns it into a confidential client. How was that secret provisioned? Not during download, where anybody can copy it. It was provisioned to the one device, which turns it into a confidential client. Is that a very common scenario? 
No, but that's an example of turning a native app that was shipped in an app store into a confidential client by doing out-of-band enrollment of it, essentially. So there are a number of different subtle scenarios like that that are interesting, but anyway, hopefully the language in the spec is now more clear that confidential only means, only means, has credentials. The other one is dynamic client registration, which is another subtle difference, because yeah, that one app can wake up and go get credentials from the OAuth server, but in doing so it doesn't establish its trust level, because there was no way: anybody could have made that dynamic registration call, unless you add authentication to that somehow, which then gets back to, how do you authenticate the first call? So these are really interesting problems, and they are worth talking about more and discussing more and documenting somewhere. I don't know if a spec is the right place to document them, but at least we can do blog posts and videos and stuff about these different situations. Yep. Yeah. Okay, moving on, let's talk about the next one, DPoP. DPoP we've mentioned several times now, and it's very nearly at the end of its life cycle, I believe. The previous update was extremely minor. Let's find out if I remember correctly. One month apart, and it was something got added that caused a bunch of pages to get re-numbered. There is a new section, but I think this was part of the... not last call, what's the... because it's already been through last call, right? It's now in the publication queue; there were other comments throughout that process. Very nice. Does that sound familiar? More or less. More or less. Yeah. But that one's very... Oh yeah, and you can see on the status page it is "publication requested," so hopefully it'll get its RFC number very soon. Indeed. And that would be a... finally. I'm sure that a number of people are holding back because it's not an RFC yet, but the only other alternative is 
not very viable, so yeah, looking forward to that bit being flipped. Yeah, and then hopefully there are no mistakes in the spec that cause it to be a problem once people go to actually implement it. There are some implementations of it already, but I know some people are definitely waiting for the RFC. Yeah, and also, in the end, I think a lot of this stuff will reveal itself once it's being used at scale, because toy examples are easy, but the moment you actually have a large deployment, especially to manage, the way products will surface this, not just at the scale level but at the management level, I think will make or break the success of many solutions. Yep, yep. Okay, let's talk about SD-JWT. So that is Selective Disclosure for JWTs. This is a fascinating one. This is new. This was adopted by the OAuth group, so it's draft-ietf-oauth rather than draft-fett. I believe that call for adoption happened in Philadelphia, and then it got published as draft zero under the OAuth group in August. Previously it was an individual draft right before that. I didn't realize the name had changed; "Selective Disclosure for JWTs" is now the name. So what is this? This is a mechanism for creating a JWT, but instead of having the data visible in the JWT, it allows the creator to reveal only certain claims when they want to. Which sounds kind of magical for a JWT: you create it, and then it's kind of done, and that's why it's very clever. Do you want to add to that? Sure. We actually had one entire episode of Identity, Unlocked on SD-JWT, and not just that, but in general the privacy-preserving measures that are available as options out there. As you mentioned, JWTs normally are a package deal, in which, in order for the magic to work, you need to be able to check a signature, and that means that you cannot change anything inside of a token. And so here you have only two ways of getting out of this, if you want to be able, for example, to disclose only a certain subset of the content. One is you can change the 
crypto that you use, and you can use exotic cryptography that allows you to extract subsets of things without breaking the signature. And it works, and it's very powerful, because you can go beyond sheer disclosure; you can also do things like, I will prove that I am older than 35 without revealing the actual birthdate, which might surprise you. But those stacks have the disadvantage that they are very exotic, so they are not available across the board; there are no proven libraries that governments will be okay with, so it's still very much the frontier. The approach that SD-JWT takes, which again Daniel Fett, one of the co-authors, explains beautifully during the episode, basically is: you get a classic JWT with a classic list of claims, but the values of these claims are salted hashes of the actual values, so stuff that humans cannot read. And then, together with this, you get a list of the actual values, the claim types, the real claim values, and the salt that was used to create each hash, outside the actual JWT. And so, now that you've received this thing, when you want to disclose a subset of the claims that you received to a relying party, to a resource, whatever, you send the entire JWT with all of the salted values, which are unreadable, and only for the ones that you want to disclose, you send the salts and the values, so that the receiver can now take the actual values that they received from you, together with the JWT, calculate the hash of those values, and verify that the value that comes out is exactly the same value that is in the signed JWT. So at that point, you've successfully disclosed those values, and you've proved that those values were issued by the provider. It's so smart, because you don't need any advanced crypto; you are just doing classic signatures and hash functions. And the things that make these 
And the thing that makes this JWT different from the usual ones, apart from this little gimmick, is that the JWT you receive has no audience. Normally you receive a JWT and it says, I am meant to be used with resource X, and in the JWT profile for access tokens it's mandatory that you have an audience, for security reasons. But here you don't, and given that you don't, the spec adds an extra mechanism for when you are doing presentations: just like we said earlier for verifiable credentials, you as a client could have a key associated to yourself, and this key might be referenced by the signed JWT that you receive. Then whenever you do a presentation, in which you present the entire JWT with all the hashes plus the selection of claims you want to disclose with their salts, you can sign the entire thing with your own key. At that point you no longer need the audience as a way to prove that the token is being used by the legitimate user in the legitimate context. You still have a risk of replay, but you cannot have someone steal that thing and then send it to a data recipient, for example. So, lots of interesting moving parts. I love this spec because it's very practical, very concrete, and it will give us the chance to see whether people care about this stuff or not. Because as identity experts, of course, we've over-indexed, and we say, of course you want selective disclosure, why wouldn't you? But the license plate of my car says WS star, which is a set of specs from many years ago, specs which died because REST and OAuth and HTTPS and bearer tokens proved to be good enough for the job. And I remember back in the day people saying, of course people want non-repudiation, how can you design a system that doesn't have non-repudiation? Turns out people didn't care at all. So here we might expect people, developers, not to care all that much about selective disclosure, or they might care a lot.
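The key-binding idea can be sketched as follows. This uses an HMAC with a shared secret as a stand-in for the holder's signature; the real SD-JWT mechanism uses the holder's asymmetric key, referenced from the issuer-signed JWT, and the exact presentation format differs. All names here are illustrative.

```python
import hashlib
import hmac
import json

# Stand-in shared secret; in the real mechanism only the legitimate holder
# possesses the private key referenced by the issuer-signed JWT.
HOLDER_KEY = b"demo-holder-key"

def sign_presentation(issuer_jwt: str, disclosures: list, audience: str, nonce: str) -> str:
    # Binding the intended audience and a fresh nonce into the signed payload
    # is what stops a stolen presentation from being replayed elsewhere.
    payload = json.dumps(
        {"sd_jwt": issuer_jwt, "disclosures": disclosures,
         "aud": audience, "nonce": nonce},
        sort_keys=True,
    ).encode()
    return hmac.new(HOLDER_KEY, payload, hashlib.sha256).hexdigest()

def verify_presentation(issuer_jwt, disclosures, audience, nonce, signature) -> bool:
    expected = sign_presentation(issuer_jwt, disclosures, audience, nonce)
    return hmac.compare_digest(expected, signature)

sig = sign_presentation("header.payload.sig", ["disclosure-1"], "https://api.example", "n-123")
assert verify_presentation("header.payload.sig", ["disclosure-1"], "https://api.example", "n-123", sig)
# A recipient expecting a different audience rejects the same presentation:
assert not verify_presentation("header.payload.sig", ["disclosure-1"], "https://evil.example", "n-123", sig)
```

This shows why the audience claim becomes unnecessary in the issued JWT itself: the binding to a recipient happens at presentation time instead of at issuance time.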
But the point is, SD-JWT is a very simple spec which gives people the chance of implementing this feature very easily. The bar is low, so it will give us the chance to test in the wild whether people care about this feature or not, and what they want to do with it, and help them do it without doing exotic stuff like really advanced crypto, which has a lot of challenges. And then, if it turns out that people really do want selective disclosure, the investment in creating good libraries for that advanced crypto in multiple stacks, so that it becomes viable, will be easier to justify, as opposed to now, when it sometimes smells a bit like WS-* and non-repudiation. I think that's really, really interesting. It will give people a chance to step into those waters more easily, rather than having to rip everything out and replace the way they're doing all of their authentication with new crypto. It's much easier to go from what you're doing today to just adding in SD-JWT where you need it, so I think that's very promising; very interesting to see where that goes, for sure. I don't know that I can do a great job explaining this, so I highly recommend the listeners go to the podcast episode, because Daniel is not only a fantastic cryptographer, he is so good at explaining this stuff; he really is a pleasure to listen to. I highly, highly recommend it. Yeah, definitely; I dropped that link in the chat, and it's on the screen here as well. Oh, here's a comment about SD-JWT: a use case of selective disclosure is that you can do least-privilege access, hiding unnecessary claims in a JWT. Yeah, you'll be able to present just the claims that are needed, rather than having to share everything that goes into a JWT today. Yeah, that's the very core of the idea of selective disclosure, for sure. Cool, okay, let's move on to browser-based apps. That is another one I'm working on; it kind of got put on hold for a little while.
While we were figuring things out and prioritizing 2.1, but some new discussions have been happening on it, and I am very excited about where it's going now. If you're not familiar with it, this is basically recommendations for people building browser-based apps, as in single-page apps, JavaScript apps that are running purely in a browser. One of the biggest new changes in this version, actually a couple (it's had a lot of good discussion lately), is a new section that talks about the architecture: basically the different options you have for how you're going to architect your apps using JavaScript. This backend-for-frontend proxy is one that Vittorio originally wrote up; he brought a draft, talked about the draft with the group, and the group decided not to pursue it at that time, but it was a pattern that exists and that people are doing. Okay, to be very explicit about what happened, let's be very clear: we put this thing out and we started discussing, but we never suggested it for adoption. It's not like the group decided no; there was no vote, because we are busy. It was an interesting discussion which we'll eventually pick up, but there was no vote, and nobody said no. It's even more specific than that: you didn't even propose the idea of the architecture; you proposed a very specific mechanism that's useful in this architecture, for how to pass the bits around. That was the actual document: we want to be able to move this access token from here to here, here's how to do it, here's the endpoints needed. That was the draft. One further clarification: the term BFF is somewhat ambiguous, but in the general case of backend for frontend, the vast majority of people will think of the scenario in which the backend does absolutely everything. Let's say that in the scenario of browser-based apps, you'll have JavaScript that talks only to the backend for everything, both for acquiring tokens and for calling APIs.
So if you're calling APIs that are not part of your app, your backend in the general case of BFF is proxying all of your calls, so you basically need a very complete BFF. It's important to clarify this because there are security characteristics that come with each of these options: at this point, the only thing between you and everything is the cookie, the session between your user agent and your own backend, and everything else sits behind it, so your threat model for calling the APIs is just the cookie, which has its own characteristics. The proposal that Brian and myself put together is for a hybrid model, in which you defer to the backend only the acquisition and caching of the tokens, so that there is no code in the browser that can be executed for getting tokens from the identity provider; you always ask your own backend. But the calls to APIs remain in the hands of the browser, which does get access tokens for calling the APIs. That has a number of practical advantages: if, for example, you are in a system that is geo-distributed, and you want clients that are close to a particular backend for the APIs to not go through a central place, now you can have all the browsers hold their own access tokens and make the calls, but the critical code, the code for acquiring new tokens, is still locked in the backend. So the tradeoff is that now the browser, which is the hardest thing to protect, does have access tokens, but it's somewhat less dangerous than the classic case in which JavaScript does absolutely everything: the classic JavaScript app that does authorization code plus PKCE and rotating refresh tokens is the one in which you have the most critical code running in the least protected environment. One of the discussions that also happened is that the proposal Brian and myself did was called TMI BFF, which was a fantastic name, but BFF was misleading, because most people think of BFF as the entire thing, including the API proxy.
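The hybrid model's backend side can be sketched like this. This is a simplified stand-in, not the mechanism from the actual draft: the session store, the refresh call, and all names are illustrative, and a real implementation would talk to the authorization server's token endpoint over HTTPS.

```python
import time

# In-memory stand-in for a real server-side session store, keyed by the
# session cookie shared between the browser and its own backend.
SESSIONS = {
    "session-cookie-123": {
        "access_token": "at-old",
        "expires_at": time.time() - 1,  # already expired, forcing a refresh
        "refresh_token": "rt-abc",
    }
}

def redeem_refresh_token(refresh_token: str) -> dict:
    # Stand-in for the backend calling the AS token endpoint.
    # This code path never ships to the browser.
    return {"access_token": "at-fresh", "expires_at": time.time() + 300}

def token_endpoint(session_cookie: str) -> str:
    """The single local endpoint the browser calls to obtain an access token.

    Token acquisition and caching stay server-side; the browser only
    receives the resulting access token and then calls APIs directly.
    """
    session = SESSIONS[session_cookie]
    if session["expires_at"] <= time.time():
        session.update(redeem_refresh_token(session["refresh_token"]))
    return session["access_token"]
```

The browser-side JavaScript would just `fetch` this relative endpoint; the refresh token and the code that uses it never leave the backend, which is the tradeoff the hybrid model is built around.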
So before we go back to the mailing list with a new version, we have to change the name, and I think Brian found a good one, but it's not as memorable, so now I don't remember it. Yeah, exactly. So what I did with this browser-based draft in the meantime, before you go back to the specific mechanism of how this works in your extension, is at least capture the idea of having the access token be acquired by a backend but then passed to the browser. This document describes that (this is what the diagram on the screen shows): that this is a possibility, and that it has different characteristics than both a full backend proxying everything and tokens only in the browser with no backend at all. There is this middle ground, but this browser-based draft does not describe how to actually implement it, like what you should do to actually pass the token back and forth; that's what your draft did, describe the specifics. Perfect, yeah, and fantastic clarification. I think that for the nature of the document you're doing here, that is the right level: to tell people, this is the topology. Then, if we really want to take advantage of this as an industry, I think we'll need to formalize some of this stuff, mostly for two reasons. One is that there are security considerations that people might not arrive at organically. When we first did the proposal, we actually were open to a couple of redirect attacks, and on the mailing list, I think it was Neil Madden who suggested adding a particular header that would protect those endpoints from those attacks; that kind of insight is something people might not get to on their own. The other is that if we do this in a formal way, in which the endpoints and the messages between the JavaScript and your backend are formalized, then we can get to a world in which we have JavaScript libraries that you don't have to configure at all for this.
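The header-based protection mentioned here can be illustrated with a simple check. The specific header from the mailing-list discussion isn't named in this conversation, so the header name below is purely an assumption for illustration; the underlying idea is that redirects and top-level navigations cannot attach custom request headers, while same-origin JavaScript using fetch/XHR can.

```python
def browser_request_allowed(headers: dict) -> bool:
    # A redirect or a plain navigation cannot set a custom header, so
    # requiring one means only the app's own JavaScript can reach the
    # token-mediating endpoint. The header name here is illustrative,
    # not the one from the actual proposal.
    return headers.get("X-Requested-By") == "bff-client"

# A fetch() from the app's JavaScript, which sets the header, is allowed:
assert browser_request_allowed({"X-Requested-By": "bff-client"})
# A request arriving via a redirect, which cannot set the header, is not:
assert not browser_request_allowed({})
```

This is exactly the sort of non-obvious hardening that a formalized spec can bake in, rather than hoping each implementer rediscovers it.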
Today, when you have a JavaScript library for doing a single-page app with OAuth, you need to specify the authorization server from which you're going to get tokens, you need to specify your client ID, and sometimes you need to specify your redirect URI. In a world in which you are doing everything with the backend, the JavaScript configuration is completely empty: if the library knows where to go to ask for these tokens on its own backend, it can be a local endpoint, so you don't need to configure it; it will just work. The work that you do is all on the backend, where you probably already did it, because you might have existing middleware. So I think it's great to describe the topology to people, and I think we need to find the time to define those patterns better, so that we can get to an interoperable place. Today, things like Next.js already apply this pattern, but of course they do it in their own way, for the very lack of standard guidance. Think: if we were able to just standardize this endpoint, you could combine React, Angular, Vue, whatever JavaScript you have, with whatever backend, Java, Node, .NET, just by defining that one little endpoint on your own backend. My hope is that we'll get there. Yeah, I think it's a great idea, and I agree that there absolutely is value in standardizing the actual mechanism. Until then, at least now we've surfaced this pattern and can talk about the characteristics that make it different from the other patterns, which is mostly what this draft is doing: talking about the different architectures and the tradeoffs. Because there is no clear best option among these patterns; they just have different tradeoffs. That's actually something that's come up a lot in the discussions about this lately. As the editor, my job, and the continual challenge, is to balance all the different opinions and try to make sure that they're all accurately represented.
People have very strong opinions about these. Some people say you should never have access tokens in the browser at all, there is no reason why that's okay, always keep access tokens out of browsers, and they will argue that forever. And then there are other people who will say, sometimes you just have to, there are benefits to it, and if you're going to do it, here's what you need to know to do it safely. That's what this is trying to balance: it's not saying, only do this; it's saying, if you are doing this, here are the things to keep in mind so you don't do it poorly. So that's enough on that one. There were quite a number of changes in this draft, so do take a look if you're curious, because this one has changed quite a bit recently, ever since the Philadelphia meeting; lots of good stuff in there. Okay, I think there were only a couple left, and we're almost at the end of the time anyway, so that works. The Security BCP: this one, I swear, is almost done, but it's been almost done for like two years; at some point we'll finish it. I don't even remember what changed recently in it, because I feel like we just keep having small things that get added. Yeah, like here's a new thing under the attacks list: authorization server redirecting to a phishing site. So there's a new section added about that. I honestly don't even remember the discussion that prompted this, but it's a thing to keep in mind, which is the point of this draft. I remember this one: it was a security analyst who brought up, as a potential attack, including in phishing emails links to the redirects offered by identity providers. Those would actually successfully get through the validation of the filters, because they are on identity-provider domains, but they are basically semi-open redirects; no one can gain undue access, but these can be used for skipping anti-phishing filters.
And you know how all the security people are always like, make sure you're looking at the links you get in emails, make sure you don't click suspicious URLs, check the link first, all that kind of stuff. If you get a link to your authorization server, the real domain of your real OAuth server, of course it looks fine; but if it is able to then redirect you to the attacker's site, it can then be used for phishing attacks. You don't even have to log into it, because some of these will redirect out on error cases as well. So this section talks about it. Oh yeah, the very loaded word trust is in here: the AS should only automatically redirect the user agent if it trusts the redirect URI. That's basically the way of telling you what to do without telling you what to do, because how do you trust a redirect URI? Well, it depends on a lot of things; it depends on what the rest of your system is doing. But it is worth pointing out, so it's in here. Very nice. And then the last one; sorry, we only have a minute for your moment to shine here. So, step-up auth didn't change much, because ultimately we are doing something very simple, and it turns out, so far, not very controversial. We just finished the working group last call, so hopefully we can move things a bit forward. The main changes are non-normative: we now warn people, beware, because you might be disclosing information about your system when you send back the challenge. This was largely thanks to the discussion we had with you in the lobby of the hotel in Philadelphia. And then we added other hygiene things, like make sure that, as a client, you don't peek inside the access token. Nothing else changed; editorials here and there, but basically what we defined at the very beginning as a mechanism is very simple, designed to be simple and do only a small thing, and as it turns out, it does it. So I think we might be close to moving to the next step.
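For context, the step-up challenge mentioned here is the resource server telling the client, via a `WWW-Authenticate` response header, that stronger or fresher authentication is needed. Roughly per the step-up draft, the sketch below builds such a challenge; the `acr_values` value is a made-up example.

```python
def build_stepup_challenge(acr_values=None, max_age=None):
    # The resource signals that a different authentication level is needed;
    # the client then sends the user back to the AS with these requirements.
    # Beware: these parameters can disclose information about your system,
    # which is the non-normative warning the draft now includes.
    params = ['error="insufficient_user_authentication"']
    if acr_values is not None:
        params.append(f'acr_values="{acr_values}"')
    if max_age is not None:
        params.append(f'max_age="{max_age}"')
    return "Bearer " + ", ".join(params)

challenge = build_stepup_challenge(acr_values="example:mfa", max_age=60)
print(challenge)
# Bearer error="insufficient_user_authentication", acr_values="example:mfa", max_age="60"
```

The deliberately small surface of the mechanism, one error code plus a couple of parameters, is what has kept the draft uncontroversial.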
And online; of course we'll join virtually as well, have some more discussions, move some of these drafts forward, and see where it goes. So we will catch up again after that and talk about how things went. Of course, anybody who's watching, you're always welcome to chime in on the discussions on the mailing list; that is the official channel to join the discussions. A lot of the drafts do have GitHub repos as well, and they are starting to get linked from within the drafts themselves (some of them are doing that), so you don't have to hunt them down. And usually, if they're on GitHub, it means the editors are willing to have a discussion on GitHub issues, if the issues tab is turned on; so that's another place to engage. Otherwise, I think that's about it for today, only slightly over our one-hour time. So thank you all for tuning in, thanks for the questions, and we will see you all again in a month or so. Alright, bye!