Our agenda: first the introduction, then after we introduce ourselves we will look at the Red Team side of things and the DevOps side of things; lastly we will bring them together in an example, and after that we will look at how we can maintain our automation codebase.

We are both security engineers, Center and Yol, members of Black Box Security and ITCAT. We blog frequently at Trentul Tech, and we do Red Team and zero-day research in our leisure time.

What we do is this: we plan a Red Team engagement, design our infrastructure around that plan, and iterate over it with our team. Once it is stable, we automate the infrastructure design using Terraform, and if the operation will take longer than usual, or we keep iterating on our automation scripts, we create a pipeline for change management so that we remove human error altogether.

The Red Team attack cycle usually looks like this: you do external reconnaissance, and once you compromise a machine you do the same internally until you get domain admin and domain dominance. After that, the company usually has some target machine or target file to exfiltrate, and until you exfiltrate that file you continue your loop of reconnaissance and remote code execution.

The operational models for Red Teams are as follows. The first is the full-scope penetration test. Although it is a controversial one, you can count it as a Red Team engagement because you have a black-box scope and you attempt to gain a foothold into the target and steal the data, so it is really a kind of targeted attack. Although it provides a data point about the state of a security program, it needs a lot of time and resources.
A similar but different model is the long-term Red Team operation. It requires you to work against the company like an actual attacker for a year or two. Its advantage for the company is that you map their network and key assets over time so that they can harden their security posture, and the hardest part for the Red Team is that you have to maintain your access over a long period of time.

The next is the classic red-versus-blue war game. It strengthens the blue team's muscles against red team tactics, techniques, and procedures, and also the red team's muscles against the blue team, because they see how the blue team handles their attacks. War games usually run in a simulated or scenario-based environment, and it is really good to hold them within the company from time to time.

The last one is adversary simulation. These engagements are based on some APT or a mock-up scenario on a realistic timeline, so you are basically imitating an APT attack or some scenario the company would like to assess. The goal is to exercise that attack without getting identified or discovered by the blue team. These exercises are really good for the company because the scenarios proceed step by step, so they can see where they fail and where they succeed at every step.

In light of these operational models there is also operations security (OPSEC). It is a term from the military, and it means denying adversary information to the opposing side. As a Red Teamer, you should be able to deny any information you may have from the blue teamers on the other side. In the wild, some attackers have made operational security mistakes over time, so let's look at some recent examples. The first example is a botnet whose operators were discovered and busted because they had not encrypted their C2 servers and the
chat sessions. The other example is Forceful, a Russian botnet developer who got exposed through the C2 server he used to carry out his attacks. The last, most recent example is that IBM researchers found a huge repository that got exposed due to a security-settings misconfiguration. If you look at these examples, you can clearly see that security-settings misconfigurations, and errors around the C2 or any other element where human error can creep in, compromise operations. It happens all the time, and that is why we actually need automation for our infrastructure design.

The standard penetration-testing setup looks like this: you usually have a team server and you access the organization directly through it, and the team server's assets, IP addresses, and domains are usually whitelisted by the organization so the test can run more cleanly. In a red-teaming setup, however, you usually have different kinds of servers behind some kind of redirector, whether it is Traefik, nginx, or HAProxy. You have multiple servers behind the redirector; the reason is that you can easily defend and recover your infrastructure against exposures by the blue team, and the redirector approach blends better into the organization's traffic.

When it comes to the operational side of things, when we design our infrastructure we should consider a few things. The first is the leanness of our infrastructure: we want smooth operation, and a complicated infrastructure means more maintenance time and more things to look at, which is the last thing we want. The second is segmentation: your tools of choice must be segmented according to their functionality, and the team server must be accessed through the redirector, not directly, because of the exposure risk from the blue teamers. The third consideration is the independence of our
infrastructure: since every part is independent of the others, if some part gets exposed we can quickly destroy it and set up a new one. The network footprint of the redirector is also something to consider, so that we can imitate real traffic from our redirector to the company and the blue teams do not find us that easily. The engagement domain is important too: whenever we plan an operation against a company, we always choose a domain from that company's domain category, or at least you should try to select it that way.

The payload and C2 specifications are also important, because they can really ease your operation. If you will use phishing, you should consider these elements when you design your infrastructure, because you might then need an SMTP mail server or maybe a third-party service; and if you will use domain fronting, you should consider that as well once you start your design.

Last are access control and request processing. You should always exercise access control over your infrastructure, and even more important is request processing: if you have a request-processing mechanism on the redirectors, then if your redirector IP or domain gets discovered and the blue team visits that domain or IP address, or runs some discovery against it, you can redirect or relay them to the actual domain or website and frustrate their discovery work. This last point is one of the most important things to consider in your design.

Let's get to the DevOps side of things. DevOps is mainly about creating self-service infrastructure for teams, and since it is self-service and contains automation, it removes the human element, which also removes manual and slow procedures, so that you can keep working continuously, lean, and fast.
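To make the request-processing point concrete, here is a minimal redirector sketch as an nginx server block. All names in it are hypothetical placeholders (the engagement domain, the implant User-Agent string, the internal team-server address), not values from any real setup:

```nginx
# Hypothetical redirector: only traffic that looks like our implant
# reaches the team server; everything else is bounced to the real
# corporate website, frustrating casual blue-team discovery.
server {
    listen 443 ssl;
    server_name cdn.engagement-example.com;        # placeholder domain

    ssl_certificate     /etc/nginx/tls/fullchain.pem;
    ssl_certificate_key /etc/nginx/tls/privkey.pem;

    location / {
        # Visitors without the implant's User-Agent are relayed
        # to the organization's actual website.
        if ($http_user_agent != "Mozilla/5.0 (OperatorImplant)") {
            return 302 https://www.example.com$request_uri;
        }
        # Implant traffic is proxied to the internal team server.
        proxy_pass https://10.0.1.10;
    }
}
```

In practice the match is usually on something subtler than a User-Agent string (URIs, cookies, mTLS), but the access-control principle is the same.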
So why are we using automation for every Red Team engagement? Since all engagements are unique most of the time, you have to plan and design an infrastructure, and you need to set it up with some bash scripts or other means. If you do that by hand for every engagement, it becomes error-prone, slow, and boring.

For automation we are using infrastructure as code. It helps you define, provision, and manage infrastructure: if you codify all of your infrastructure, you can track and version changes, validate changes, and remove the human element from the deployment procedure, because it is all done by the code you write. Automating the provisioning and deployment process is the key here.

For infrastructure as code there are a couple of tools: Chef, Puppet, Ansible, SaltStack, CloudFormation, and Terraform are common in DevOps practice. So why are we using Terraform? Apart from CloudFormation and Terraform, the other tools are mainly configuration-management tools, designed to manage existing infrastructure. CloudFormation and Terraform are provisioning tools: although they have some small degree of configuration-management capability, they are mainly for provisioning the infrastructure itself. And as the slide states, CloudFormation is mainly for AWS, while Terraform has multi-cloud support, so we use Terraform. If you couple Docker with Terraform for configuration management, all of your needs resolve themselves anyway.

There is another reason why we use Terraform. There is a term called configuration drift: when you maintain infrastructure over a long period of time, machines differentiate from one another in terms of software versions and so on. If you use Terraform with Docker, you reduce the likelihood of these differences, because every
change effectively means a new deployment.

Another thing is that Terraform has a declarative style of coding. To give an example: if you have 10 instances, you automate them one way in Ansible and another way in Terraform; but if you need one more instance on top of those 10, you have to rewrite your Ansible to account for the extra one, while in Terraform you just set the count to 11 and Terraform takes care of the arithmetic itself. You don't need to write new scripts anymore; with Terraform you update your existing one and it keeps working.

The last point is that those other tools require a master and agents to operate, but Terraform is masterless: it talks directly to the cloud provider's APIs, which removes the need for a master server and agents and makes our infrastructure leaner. Since one of our design considerations is leanness, Terraform suits it well.

In light of this, there are three common combinations when it comes to building and automating our infrastructure. The first is Terraform plus Ansible for provisioning and configuration management; but as I said, for a Red Team operation we need it as lean as possible, and Ansible might not work in that scenario because it requires a master server and agents to operate. The second: if you are using virtual machines, you can use Packer to template them and deploy them with Terraform. The last, and our recommended approach, is using Kubernetes to orchestrate your infrastructure assets and deploying them with Terraform, so that everything is taken care of by Kubernetes. Nearly all of the cloud providers offer a managed Kubernetes service, so you can take advantage of that and just deploy your Docker containers with the configurations you want, leaving the management side to them.
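The count example just mentioned can be sketched in HCL. This is a minimal illustration, assuming AWS; the region, AMI ID, and instance type are placeholder values:

```hcl
provider "aws" {
  region = "eu-west-1"                      # placeholder region
}

# Ten identical redirector instances, declared as a desired end state.
resource "aws_instance" "redirector" {
  count         = 10                        # change to 11 and re-apply to add one more
  ami           = "ami-0123456789abcdef0"   # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "redirector-${count.index}"
  }
}
```

Because the file declares the desired end state, going from 10 to 11 instances is a one-line change: `terraform apply` diffs the declaration against what already exists and creates only the missing instance, with no imperative bookkeeping on our side.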
That lets you operate more easily.

Now let's get to the demo. Hi everyone, this is Shaller. Today we are going to demonstrate building a Red Team operation infrastructure on AWS. We chose AWS, but you are free to use other platforms such as Alibaba Cloud, Google Cloud Platform, or your own private cloud. Our demo has four phases: inspecting the scripts, running the scripts and building the infrastructure, payload execution, and destroying everything. In our demo we create a simple network topology, but you can extend it to your own needs and requirements.

Let's take a quick look at our Terraform and Helm scripts. In our repo we have four important Terraform scripts. The first one, the vpc.tf file, provisions a logically isolated section of the AWS cloud where you can launch AWS resources in a virtual network that you define. You have complete control over this virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. The EKS cluster file creates a Kubernetes cluster with one worker; in our demo we deploy an nginx ingress and Metasploit, managed with EKS. The Helm file is used to deploy nginx and Metasploit to the worker. The last one is the security group file, which handles access control for communication between servers and services. We also have a file that shows the outputs of our scripts, for debugging.

Let's start building our infrastructure with Terraform. First of all, if you are not using the root account, you have to create a user and attach a policy that can access the AWS services. After user creation you should add the credentials to your profile. Let's initialize our scripts and validate them; it seems everything is fine and ready. Now let's check the status of the Kubernetes cluster's nodes and pods. Let's execute our payload. Now we are ready to destroy the whole infrastructure. Everything is clean now. Thanks for listening.

Now let's get to the
main part. As you see, we validate, deploy, and destroy at every stage of our testing, so we can automate it with GitLab CI. For that we first use init and validate to health-check our scripts, and after that we use apply to deploy. If we deploy, we must be able to get the state so that we can destroy everything afterwards. After deployment, the last stage is destroy, and it should always run in the pipeline: if you have problems during deployment, some of your assets will get deployed and some will not, and if the destroy stage does not run, those deployed assets will stay online the whole time. So we should always use the destroy stage in our CI; it tears down our infrastructure after testing, so we can validate our code and see whether our infrastructure deployed correctly or not.

The main takeaways from our research and work: choosing the correct tools is key, whether they are engagement tools or automation tools; always choose whatever you are comfortable with. For every Red Team engagement, whether you are doing a long-term penetration test, red teaming, or adversary simulation, do not make your infrastructure complex: try to build a lean infrastructure with independent parts, and mind the domain choice as we mentioned before, so that you can manage it easily. And lastly, use automation and CI so that you can test and validate all of your infrastructure and have no surprises during the engagement itself.

If you have any questions, we will be on Discord throughout the day, so you can ask them there, and as I said we will share the link to the slides on Discord as well so that you can reach them.
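The pipeline described above might be sketched in a `.gitlab-ci.yml` like the following. The stage names are ours and the image tag is illustrative; your runner, state backend, and credentials setup will differ:

```yaml
image:
  name: hashicorp/terraform:light
  entrypoint: [""]          # override the image's terraform entrypoint for CI

stages:
  - validate
  - deploy
  - destroy

validate:
  stage: validate
  script:
    - terraform init
    - terraform validate

deploy:
  stage: deploy
  script:
    - terraform init
    - terraform apply -auto-approve

destroy:
  stage: destroy
  when: always              # run even if deploy failed, so no assets stay online
  script:
    - terraform init
    - terraform destroy -auto-approve
```

The `when: always` on the destroy job is the important detail: it guarantees the teardown runs even when apply fails halfway, which is exactly the half-deployed-assets problem described above.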