Hello everyone, and thank you for joining my talk. My name is Gil Beton, and today I'm going to talk about red team challenges. I will also demonstrate how we tackled these challenges in our team, so you can do the same in yours.

Before we dive in, a bit about myself. I'm originally from Israel but currently based in Singapore. Hacking has always been part of my life; I'm constantly trying to figure out how to use technology and science to make my life easier. I have over five years of experience in the cyber security industry, starting with application penetration tests, then moving through infrastructure engagements and red teaming. My expertise lies in enterprise security and its related aspects. Today I work at Sygnia Consulting as an offensive security engineer, as part of its security research team. I'm available on the social networks listed here, so feel free to reach out.

First, let me give you some context. We have to admit it: red and purple teaming became harder. Over the past years, red teamers have been struggling with challenges during engagements, because organizations lifted up their detection capabilities and integrated advanced security solutions. Executing even basic red team tasks got complicated. Organizations also have a variety of products and vendors incorporated in their networks, so techniques that worked in one organization fail or get detected in another. Logging and monitoring capabilities were enhanced as well. We are recorded 24/7 by the Big Brother team and its SOC siblings, so avoiding alerts during an operation became a challenge by itself.

To handle the situation, adversaries spend even more time on the weaponization phase, both before and during the operation. Many of these tasks are repetitive, and sometimes they cause delays due to technical issues we have all experienced before. Let me ask you a question: how many times have you weaponized the same tool? Or how many times have you helped a colleague use a technique that you found or used?

Speaking of colleagues, while working with a growing team divided across multiple engagements, we realized that new challenges were added: working from home during the COVID era, back-to-back engagements, and new developments that team members created getting lost as soon as their engagements ended. So we understood that we wanted a better platform to collaborate on. In my opinion, having a baseline standard can enable equal capabilities across your team members.

Now, whenever we develop or discover a new capability, we have to store it somewhere, right? There are many recommendations and methodologies out there, and every day a new exploit, technique, or tool is released. It's hard to follow and incorporate every technique into your methodologies while being busy with multiple engagements. Security teams also share thoughts during hallway conversations or coffee breaks, but memorizing and storing all this content efficiently became complicated. So, until Elon Musk provides us with his Neuralink, we have to find another solution.

We understood that we want to bring more automation into our engagements, to reduce the time spent on repetitive tasks we are not really interested in. And we knew the community had already adopted the CI/CD pipeline concept to automate tasks related to offensive tool weaponization.
Offensive CI/CD pipelines have been around for a couple of years, with the goal of helping red teams automate their tasks. I'm not going to talk about CI/CD in detail, but we are going to dive into the advantages of using it for offensive needs. I truly believe that we cannot automate an entire red team operation, as we need to bring our own expertise, knowledge, and way of thinking. We want a mind behind the operation who can make decisions in real time, according to the feedback they receive. That way, you can put more focus on bypassing new barriers you've never tackled before.

We started exploring the CI/CD area and performed research that ended with a pain point we really wanted to solve. This pain pushed us to design and develop our own offensive pipeline framework, focused on the needs of our growing adversarial team.

Such needs include simplicity: as a growing team, we wanted to onboard new members to this concept easily, and make it even simpler for ourselves, so the migration would be faster. There is also a need for modularity: the framework must allow the developed techniques to be packaged individually, so we can mix between them when assembling pipelines that weaponize different tools. We wanted the framework to maintain itself, so we don't add maintenance overhead for ourselves. We were looking for a system that anyone can contribute to, so we gain from the efforts of each and every team member: we have many engagements, and any of our team members may solve a complex challenge that can then be shared back into our offensive pipeline framework. We also wanted the environment's infrastructure to be controlled by us, since the sources and tools we weaponize are considered malicious and we don't want them to get analyzed or blocked; having these workflows on a SaaS solution could create obstacles along the way. Also, while performing a red team engagement, you sometimes need a specific tool to achieve your goal, and we all know that delays can cost you the operation. And we have to remember that each engagement gets different artifacts, so if a tool or a collection of tools loses its reputation, other engagements are not affected.

Considering all these needs, we ended up choosing GitLab as the core of our framework. Looking at its high-level description, we could already predict that it would answer our needs, and let me explain why. We researched a variety of frameworks, such as Jenkins, CircleCI, GitHub Actions, and AppVeyor, which served us for the past year and taught us the power of having CI/CD concepts within your security workflow. But these tools didn't really match our needs. Even GitLab was not perfect; I actually started going over its source code when I saw a possible constraint. Still, high-level descriptions are just words, so let's discuss the technical aspects.

GitLab started off as a code repository with version control, allowing you to store and manage the sources of your tools. GitLab also provides a RESTful API, which allows you to automate anything you can do manually, and it comes with detailed documentation that can save you time when figuring out how to approach a call. A must-have feature is GitLab CI, which provides the ability to create pipeline jobs, which I refer to as recipes, in simple and organized YAML files.
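To make the recipe concept concrete, here is a minimal sketch of such a YAML recipe. The job name, container image, and commands are my own illustration rather than the framework's actual files; the keywords themselves (stages, script, artifacts, rules) are standard GitLab CI syntax:

```yaml
# Minimal GitLab CI recipe: one stage, one job.
stages:
  - build

build-tool:
  stage: build
  image: mcr.microsoft.com/dotnet/sdk:6.0   # any container image can serve as the job environment
  script:
    - dotnet build -c Release               # the command that builds/weaponizes the tool
  artifacts:
    paths:
      - bin/Release/                        # saved so a later job (or you) can pick it up
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'   # conditional execution: run only on a push
```

Recipes in the framework are built from exactly these primitives, chained into multi-stage pipelines.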
GitLab CI also offers multiple integrations with different systems where you can execute your job recipes. For example, as part of the CI concept, you need to execute your jobs on an operating system, either Linux or Windows; that can be a single server or a container, and built-in support for Docker and Kubernetes helps you get there faster. Jobs can also be executed on specified conditions: on a push you just made to your repository, whenever another pipeline ended successfully, or when triggered by another pipeline. The multi-pipeline support allows you to trigger several pipelines by executing only one. For example, when we perform a red team engagement, we tend to use a collection of tools, and we don't want to weaponize them one by one, right? We want to trigger one pipeline that delivers all of them to us. I believe this is just the tip of the iceberg, and I'm pretty sure you'll find additional features to use in the future.

Let's see a simple example of an offensive pipeline recipe in motion. The pipeline starts by cloning Rubeus, a C# tool, from the code repository. Then the tool gets built using a job we defined that contains the MSBuild dependencies. The compiled binary passes to the next stage, where it gets obfuscated using ConfuserEx. The confused binary then passes to the next stage, where it gets wrapped by a .NET assembly loader, letting us execute the .NET tool via PowerShell. Finally, it gets deployed to your favorite bucket so you can download it from anywhere you want. In addition, we also deploy it to our pwndrop server, a server that allows you to manage the way you download your files.

Another example uses PowerShell, with the tool Invoke-DomainPasswordSpray. This time we don't need to build it, but to aggregate it from a few PowerShell scripts. The combined PowerShell script passes to the next stage, where it gets obfuscated with Chimera, a tool designed to bypass AMSI and antivirus when obfuscating PowerShell scripts. Then it goes directly to the last stage, where it gets deployed to our pwndrop server, so we'll be able to download and execute it in the targeted environment. In the same way, we may add additional sources of different tools and define their pipelines with jobs we already developed, and this is where modularity plays its significant role. For dessert, we can use the pipeline triggering options or the GitLab API to trigger multiple pipelines based on different groupings. This enables us to weaponize tens and hundreds of tools in minutes.

Today, I want to introduce Skellops. Skellops is a framework that empowers red teams by enabling them to put more focus on what they need to do instead of how to do it, and this is achieved by designing great recipes. Let's dive in to see the possibilities of this framework.

After we authenticate to our GitLab, we can see that it contains a few repositories. The first one is CI recipes: a collection of all the YAML files containing the jobs we use to weaponize our tools. The three other repositories are tools that we want to weaponize.
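As a reference for what these recipes look like, here is a rough sketch of a build, obfuscate, and deploy recipe for a C# tool like Rubeus. The container image names are assumptions standing in for the customized images shown in the demo, and I've left out the .NET loader-wrapping stage for brevity:

```yaml
stages:
  - build
  - obfuscate
  - deploy

build:
  stage: build
  tags: [windows]                                  # routed to the Windows runner
  image: registry.example.local/msbuild:latest     # assumed image with MSBuild + dependencies
  script:
    - msbuild Rubeus.sln /p:Configuration=Release
  artifacts:
    paths:
      - Rubeus/bin/Release/Rubeus.exe

obfuscate:
  stage: obfuscate
  tags: [windows]
  image: registry.example.local/confuserex:latest  # assumed image bundling ConfuserEx
  script:
    - Confuser.CLI.exe rubeus.crproj               # the .crproj describes what to obfuscate
  artifacts:
    paths:
      - Confused/Rubeus.exe

deploy:
  stage: deploy
  tags: [linux]                                    # note the OS switch mid-pipeline
  image: alpine:latest
  script:
    - echo "upload Confused/Rubeus.exe to a bucket or pwndrop here"
```

Each stage hands its output to the next via job artifacts, which is what lets the stages run on entirely different machines.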
Now, let's say we want to add an additional tool, and in this case we want to add SharpEDRChecker. What we're going to do is enter the CI recipes repository and add the relevant entries for the SharpEDRChecker tool. This time we will use the Web IDE, which is very useful here. You will see a few sections within this repository. The relevant one for the tools is the tools folder, where you can see the recipes of the different tools we want to weaponize. The tools index contains all the tools that are imported into the GitLab instance. To add the additional tool, we have to create an additional object within this array and provide it with the SharpEDRChecker Git repository link. We also have to specify the name of the project, so the automation can distinguish it from the other projects, and create a recipe for it so its weaponization can be automated.

Because its recipe doesn't exist yet, we have to create it as a new file. And because SharpEDRChecker is a C# tool whose build structure is essentially the same as Rubeus, we can actually copy the same recipe and change the relevant names. We have to remember which stages we are going to execute; in this case, we are going to build, obfuscate, and deploy it. All the relevant jobs are included within the YAMLs above.

Now, when we commit the tools index, we actually trigger a pipeline that automatically imports the tool. You can see that the pipeline was triggered below and a job was created. While the job runs, let's see how we designed it. Under CI maintain, we included everything that maintains the framework and the infrastructure itself, and under tools import we have the import public tools job, which reads the tools index file, compares it with the existing projects within our GitLab instance, and imports the missing tools. As you can see, the job succeeded. We also have the API output here, and we can see that SharpEDRChecker was added to our project list.

Let's trigger its pipeline and see what happens. If you remember, we pointed it at the build, obfuscate, and deploy stages, and as you can see, there are three different stages with a job in each of them. So let's understand each and every job. We'll start with the build job, which is under CI builders, in the C# tools YAML. Here we're using a customized Windows container that we created to contain MSBuild and all its relevant dependencies for building C# tools. It compiles the tool with the Release configuration and eventually uploads it to the job artifacts, so the next job can pick it up. The next stage is obfuscation with ConfuserEx, which is under CI obfuscations. ConfuserEx also executes on a customized container that we created for it. It starts by fetching the artifact from the previous job and runs ConfuserEx to obfuscate the compiled binary. Eventually, it uploads the obfuscated binary to the job artifacts, so the next job can pick it up. The last job is deploy-to-pwndrop; let's take a look at what it is. It's under CI deployers, in the pwndrop YAML, as the deploy-pwndrop job, and it executes on a Linux container. If you noticed, we are actually weaponizing our tool across two different operating systems with different dependencies, and this is done in no time.
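Here is a sketch of what that deploy job can look like. pwndrop itself is normally managed through its admin panel, so treat the upload endpoint and header below as illustrative placeholders rather than pwndrop's actual API:

```yaml
deploy-pwndrop:
  stage: deploy
  tags: [linux]
  image: curlimages/curl:latest        # tiny Linux image that ships curl
  variables:
    PWNDROP_URL: ""                    # supplied when running the pipeline
    PWNDROP_WRITE_KEY: ""              # ditto; the job fails without them
  script:
    # Hypothetical upload call -- adapt to however your hosting server accepts files.
    - >
      curl --fail
      -H "X-Write-Key: $PWNDROP_WRITE_KEY"
      -F "file=@Confused/Rubeus.exe"
      "$PWNDROP_URL/upload"
```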
For deploying to pwndrop, we have to provide this job with the relevant variables, so it can reach the server and upload the files with the relevant access. Since we didn't provide these variables, this job will fail. Let's leave it here and take a look at the multi-pipeline feature.

This is also part of the CI recipes, and this time we want to trigger the pipelines of the three tool repositories that we have. We already made an AD YAML under the CI multi-pipeline folder, with three different jobs that trigger the pipelines of the other repositories. The condition to execute these jobs is supplying the CI multi-trigger variable together with the relevant value. In that way, we can tag different groups of tools in order to trigger their pipelines together in an efficient way.

Let's execute the pipeline of the CI recipes repository and choose the relevant pipelines. As you can see, we have the CI multi-trigger variable here, which executes multiple pipelines. We want to execute all of them, and they all carry the "all" term, so we just write "all". And since we want to deploy them to our pwndrop server, we have to provide its URL and also its write key. We'll copy them and enter them into the variables; we can extract the write key from this green button. Don't enter the whole thing, just take the write key. Now we can run the pipeline and see that the relevant repositories' pipelines were triggered directly from here. We can see that PowerUpSQL was triggered, Rubeus, and also Godi. These tools are written in three different languages, which is what we wanted to show you. Rubeus passes through build, obfuscation, and deploy in the same way as SharpEDRChecker, because we copied that recipe; PowerUpSQL goes through Chimera and deployment; and Godi just gets built and deployed.

Now we'll wait for the pipelines to finish and see what happened. Green indicates that everything was done successfully. Let's take a look at the output of the PowerUpSQL jobs so we can understand what really happened. We see a lot of obfuscation values here, and we see that it also uploaded the artifact for the next job. In the pwndrop deploy job we can see that it succeeded, and we can also see the response from pwndrop. It means that all the files the pipelines just created should be deployed right here. Let's download the file that Chimera produced and take a look at it. As you can see, all the strings look obfuscated and randomized, even the function names, so it looks very useful.

The last thing I wanted to show you is the Dockerfiles. We are actually storing our customized Dockerfiles within the maintain folder, where there is a job that can pick them up and build them on top of another container. This is only supported for Linux and is handled through a Google project named Kaniko. We can take a Dockerfile and build it through our pipeline, managing all the infrastructure of this framework as code. You can see that we have a special variable to trigger that kind of pipeline. If we take a look at the CI recipes pipeline and trigger it, we'll be able to provide the names of the Dockerfiles we want to build and push. "Dockerfile build Linux" is the name of the variable, and now we'll enter the name, the prefix, of the Dockerfile that we want to build and push.
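For context before we look at the result: a Kaniko-based build job follows a well-known pattern, roughly like the sketch below. The Dockerfile paths and the variable name are assumptions, and registry authentication setup is omitted:

```yaml
build-linux-container:
  tags: [linux]
  image:
    name: gcr.io/kaniko-project/executor:debug   # the debug tag includes a shell, which CI jobs need
    entrypoint: [""]
  script:
    # Kaniko builds and pushes an image without a Docker daemon,
    # which is what makes it usable inside a Kubernetes-executed job.
    - >
      /kaniko/executor
      --context "$CI_PROJECT_DIR/maintain/dockerfiles"
      --dockerfile "$CI_PROJECT_DIR/maintain/dockerfiles/$DOCKERFILE_BUILD_LINUX.Dockerfile"
      --destination "$CI_REGISTRY_IMAGE/$DOCKERFILE_BUILD_LINUX:latest"
  rules:
    - if: '$DOCKERFILE_BUILD_LINUX'              # run only when the variable is supplied
```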
As you can see, a new job was created, named build Linux container. This job can be found under CI container builders, where it uses the Kaniko project, and eventually it pushes the container to our private container registry.

Great. I hope you enjoyed the demo. Now that we've seen all the magic, let's understand the infrastructure running behind the scenes of this framework. We start with the GitLab instance, which comes with the built-in CI/CD. To execute our jobs, we use a Kubernetes cluster with two different node pools: one for executing Linux-related jobs and the other for executing Windows-related jobs. For the Kubernetes cluster to communicate with the GitLab instance, GitLab created something called GitLab Runner, a Helm deployment that you can deploy to your Kubernetes cluster and that acts as a proxy between the GitLab instance and the cluster. It receives jobs from the GitLab instance and instructs the Kubernetes cluster how to execute them. We also created another GitLab Runner deployment that is responsible for the Windows-related jobs. Our Kubernetes cluster is connected to our container registry, where we store the customized containers we use during the operation and the pipeline execution.

Having this framework on-prem can be nice and great, but we can also shift it to the cloud, and in this example we use Google Cloud resources to host it. This time we created a Kubernetes Engine cluster together with Google Container Registry, which communicate perfectly, with an attached service account holding the relevant permissions to push and pull containers. The Kubernetes Engine and the GitLab instance can communicate internally because they sit on the same VPC. We also added Google Cloud Storage, allowing us to store utilities we'll need during our pipelines, and we created a firewall so we can operate the framework, use it, and actually enjoy it without exposing it to the whole internet. Everything sits in a single GCP project, where we can maintain it in one place. GitLab can then import tools from remote Git repositories.

As part of the Skellops framework, we are releasing a Terraform script that will allow you to deploy the exact same environment in your cloud, and it comes with the built-in recipes we've just shown. All you need is a GCP subscription and a web browser: refer to the project's repository and follow the instructions.
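To give you an idea of how the runner piece is wired: deploying a GitLab Runner into the cluster is a Helm install. A minimal values sketch, assuming the official gitlab/gitlab-runner chart (the internal URL is a placeholder):

```yaml
# values.yaml for the gitlab/gitlab-runner Helm chart (Linux node pool);
# a second release with Windows node selectors serves the Windows jobs.
gitlabUrl: http://gitlab.internal.example/    # reachable over the shared VPC
runnerRegistrationToken: "<registration-token-from-gitlab>"
runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        image = "alpine:latest"               # default job image when a recipe sets none

# Install with:
#   helm repo add gitlab https://charts.gitlab.io
#   helm install gitlab-runner -f values.yaml gitlab/gitlab-runner
```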
A few words about the cloud costs. We can divide them into idle costs and per-job costs. We want the framework to be waiting for us, so we can operate it and run pipelines whenever we need, but we are not continuously running pipelines. That means two instances need to stay up, and they consume most of our credit. There is also a per-job cost, incurred when new nodes are provisioned, and this is the tricky part: you may provision one node for one job, which translates into one pod, but if you create ten jobs simultaneously, they will share the same credit. Unless you plan to supply weaponized tools to the entire world community, the bottom line is that you'll pay less than 100 US dollars a month to use this framework.

Some additional thoughts came to my mind while creating the framework and this presentation. The first is that this can be a community-driven framework. We just released the infrastructure code and the CI recipes repository itself, allowing people to collaborate and share their techniques in one place where anyone can enjoy and contribute, in the same way people share Cobalt Strike Aggressor scripts today.

Another problem will come up after using this framework: now that you can speed up the repetitive tasks of your engagements, you'll find yourself collecting all the enumeration and reconnaissance information in no time, and then trying to understand how to process all this data. Also, if your team really decides to use this framework efficiently, you may end up with an operator executing a task that bypassed a few defensive security tools without even knowing how it bypassed them. I'm not a fan of not knowing what you're doing, but this is something that can happen, and it may enable additional people to perform adversary simulations and red teams.

I also believe the question about command and control frameworks came to your mind. We are not planning to replace them with offensive pipelines; we have to use them together. C2 frameworks are heavily monitored by detection and prevention security tools, and they don't always get enough updates, so you may find yourself stuck on an old version of some tool, trying to figure out how to load a new one. With offensive pipelines, you can grab your beacon, agent, or grunt from your favorite C2, apply the obfuscation and evasion techniques to it, and send it back to a hosting server, so you can download and execute it in the targeted environment without getting detected.

I also listed here the references to the technologies this framework leans on, so you can go ahead and extend your knowledge about every bit and byte behind it. I want to thank everyone who took part in designing this framework, and everyone who helped me prepare this presentation. Thank you very much, and thank you for staying with me until now. I hope you enjoyed the talk and will consider adopting CI/CD concepts into your red team engagements. I'll be taking your questions, feedback, and comments on the Discord server. See you there, bye-bye!