Okay, welcome everyone. We'll go ahead and get started. Thank you for attending our session, where Nathan and I will give you a rundown of building security frameworks into your CI/CD pipeline. A little background: my name is David Brock, and I'm the product manager for the Compose platform here at Allstate. Hopefully you were able to stop by our booth in the foundry at some point during the conference and get a rundown of Compose. I'm joined by Nathan, who is a senior manager of application security and cloud security with an enterprise group at Allstate. What we're going to talk about today is how we're partnering to bring enterprise security into what we're doing with the cloud, and how that partnership is allowing us to automate things and build a stronger platform. So with that, I'll hand it over to Nathan. All right, thanks David. Everyone, thank you for coming here at the end of the conference — the last session, where security belongs, right at the very end. Appreciate it. We're going to talk in a little bit about how we're basically going to turn Jenkins over to security and put laws into Jenkins. I'm joking. We're actually going to talk about a partnership where security is embedded, but the product team is still responsible for being a delivery organization and actually delivering product to market. We'll go through some of that, and we will get technical and look at some code — I won't go into too much detail, but I want to show you conceptually how it's done — while keeping it at a high level so you see the full process stripe. Security is a huge domain of topics to get through in about 40 minutes. As Dave mentioned, I'm the senior manager of application and cloud security at Allstate.
My background: I've been in security about 16 years, primarily focused on application development in agile environments, both Scrum and XP methodologies. So as we go through this, you're going to see some things that look like a Scrum environment and some that look like an XP environment; a lot of it can be applied either way, so just keep that context at a high level — this is an agile methodology. Okay, so at a high level, this is a process flow with a security perspective on it. You're familiar with this: in an agile world we have product managers coming up with requirements and acceptance criteria and adding those to a backlog, and then you have the various phases — portfolio prioritization, user story breakouts, whatever it may be — where we're determining what's important and what we're going to work on. And then all the way down to the developer level, where they're planning and actually coding in boxes five and six, and then seven and eight — that's where we really get into the fun of it. That's where Jenkins sits, and that's where we'll get into the meat of continuous audit and continuous inspection. So this is really about how you get continuous audit added to your continuous integration pipeline, and we're going to go through that journey throughout the presentation. All right, so the first thing is: define security. That's what I hear a lot from developers — "what are my security requirements?" — and security says, "ah, you need to be secure." What does that mean? What does it mean at the operating system level, at the infrastructure level, at the application level? And as we move more and more into a platform society and DevOps, some of those lines start to blur on who's responsible for that.
So a lot of times requirements are assumed to be taken care of by your enterprise infrastructure folks, because it's "the operating system" — but security has no context of containerization. They don't know what runC is, they don't know what Docker is, and that's another layer of abstraction that still needs security requirements applied to it. That's why the partnership David mentioned is important: you can't secure something you don't understand. What ends up happening is you have requirements originally intended for a traditional environment that don't make sense anymore, but somebody says "you have a firewall in there," and that checks the box on compliance — it doesn't necessarily mean you're more secure. And that's the biggest difference: there is a difference between being compliant and being secure, and I'm interested in being secure. So security needs to partner with the developers to actually understand the environment and which controls need to go where, so we don't end up with something like this. This is traditional — we run into it all the time: we don't allow our operating systems to talk outbound, but within the containers Jenkins has the ability to talk outbound, grab that information, pack it into a container, and deploy it anyway, bypassing the security controls — which doesn't help the developer, the product, or security. All right, so this is how we define security. I've been in security 16 years; if you ask me how I define security, I'll say I don't know. Much like your world, it changes every day. You have new technologies, new products and services you want to try out to meet customer needs; it's the same in security — our threat landscape is constantly changing, our vulnerabilities are constantly changing, and I can't keep up with it all. These are my problem statements; these are my user stories.
These are what guide us. And if you really think about it, where do these standards come from? NIST — the National Institute of Standards and Technology. It comes from a law called FISMA, the Federal Information Security Management Act. Think about where laws come from: the legislature. Legislators are voted in by the people, and the people themselves at some point were concerned about the security of their data and about the companies that were supposed to be entrusted with taking care of it — and that wasn't happening. So the people — not through a product manager, but through their legislators — came up with these various standards, or asked the legislators to come up with them. These are user stories. These are user acceptance criteria. They're not just compliance. If we change our mindset a little and think of these as the voice of our customer — it's an unspoken voice, an expected voice; users are not going to directly ask for the security requirements, but that's all these are: requirements from our users. And we're going to take you through turning them into code. All right, so the first step is up here, where we have product managers looking at what the acceptance criteria are and what the features of a minimum viable product are. The key with "viable" for a customer: include security. How do we define security within that viable MVP? That's what's missing. A lot of what I see in the enterprise is security at the tail end: you're just getting ready to go to production, you have all your commitments, everything lined up, you think you're set — then you have your pen test, then your vulnerability scan, then whatever it may be, and that's when you find out there's a whole bunch of stuff missing. We need to figure out how to get that to the front end of the pipe, and the way you do that is through partnership: having security be part of the product delivery organization, there at your inception, at your user story breakouts, defining scope for the minimum viable product. That's the first phase we're going to talk about. This is a user story right here: given/when/then — Gherkin syntax. Those laws and those standards are problem statements that you can actually translate into that Gherkin syntax and put into a given/when/then statement, and in a minute you're going to see how powerful this is when you couple it with test-driven development. In this case we have a basic privacy system message — everyone has seen one: you have to have a privacy policy stating that we're going to collect X, Y, and Z information from you. And down here — this thing doesn't have a laser, and I'm sorry for the resolution — you'll see this is mapped to NIST controls: in this particular case, NIST AC-3, AC-4, AC-5, and AC-6. So we're taking a user story and mapping it back to a compliance requirement before it ever gets down to the developer. Let me step back: we can have user stories for the application, for the operating system, or for the container. In this case we're looking at a denial-of-service attack story at the network level, and then you can have one for the database. Again, sorry for the resolution — the point is not to go through every NIST control, because that's super boring (I enjoy it, but I also get to turn it into code, which is what makes it fun for me). The point is that you have all kinds of acceptance criteria for the definition of a secure database right inside these NIST controls.
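As a rough sketch of that idea — a given/when/then privacy-notice story turned into a directly executable, NIST-mapped check — it might look like the Python below. The banner text and the control list are illustrative assumptions, not the actual story from the slide:

```python
# Hypothetical sketch: a "given/when/then" privacy-notice story as code.
# The required text and the control mapping are illustrative assumptions.

REQUIRED_NOTICE = "we collect the following information"

def check_privacy_notice(page_html, controls=("AC-3", "AC-4", "AC-5", "AC-6")):
    """Given a user visits the site, when the page renders,
    then the privacy notice must be present (mapped to the listed NIST controls)."""
    passed = REQUIRED_NOTICE in page_html.lower()
    return {"nist_controls": list(controls), "passed": passed}
```

Run in the pipeline, a failing result is an acceptance-criteria failure traceable straight back to the control IDs.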
All right, so now the other big thing is documentation. How many people have heard: where are your governance documents? Where are your standards, your policies, your procedures? Those aren't traditionally things that XP or agile Scrum teams deliver, one would think. But in reality you're creating a lot of READMEs, a lot of documentation on how you go about doing things, to make sure it's predictable and repeatable — that's the name of agile: you want to be lean, fast, automated, and a lot of it is predictable and repeatable. Well, that documentation — developer docs, READMEs, whatever it may be — actually constitutes governance and compliance documents. Your API docs are another one: those are standards for how to interface with your application in a secure way. A lot of people don't realize those map directly to NIST controls — usually the first two controls in every NIST family are a policy, a procedure, or a standard. You're already creating those; you just have to get a security person who's part of the product delivery team to put a pretty wrapper around them and communicate that to internal audit, to your security team, whoever it may be. All right, now we're getting down to — pay-attention time — the actual iteration planning. This is where the developers have the user stories from the product managers, and we've mapped those back to the NIST controls, or COBIT or PCI or HIPAA, whatever it may be — all these laws are just user stories. We're going to get into TDD. We take the features that come from NIST — features the user has requested through us, they just arrived via law — map them back to a NIST requirement, and start to identify the feature, the background, the scenario. This is part of test-driven development, and again, this is business language: something an auditor can look at, something your internal security people can look at. How many of you are familiar with FitNesse, or Cucumber — a BDD-driven framework? That's what I'm about to walk you through. This is behavior-driven development: we're taking those laws and those standards and turning them into BDD. So this is the next phase — we're getting down to actual executable code, and the next slide talks about it; I just want to give you a chance to look at this. We went from a high-level business statement, took it a step down — we're working on the step test at this point — and finally what you end up with is actual functioning code that's mapped back to a NIST standard. How powerful do you think that is from an agile perspective? If I'm a waterfall organization and you ask when my last NIST or compliance audit was, I'll say last quarter — or, best case, maybe last year — because those things take forever. Guess what the Scrum, XP, agile folks get to say: "I had a build that ran five minutes ago; here are all 635 tests mapped back to NIST or PCI or HIPAA or whatever it may be, and here's my build — it's green." More importantly, your standard audits — those point-in-time pieces of paper — do nothing to prevent insecure or non-compliant code from getting to production. But in this scenario, if somebody later writes code that happens to violate some security tenet we previously defined as acceptance criteria, it'll break the build and prevent that code from going to production. That's powerful — that is a super powerful thing to be able to attest to. Oops, sorry about that — this is just another example of a unit test, mapped back to a NIST control up top using tagging. That's the other thing you can do: search your code and report on how many tests you have and which NIST controls they map back to.
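The tagging approach might be sketched like this: a decorator records which NIST controls each test covers, and a tiny reporter counts tests per control. The decorator, test names, and control IDs here are all illustrative, not a real framework's API:

```python
# Hedged sketch of control tagging: each test declares the NIST controls it
# verifies, and a reporter aggregates coverage per control for an auditor.

NIST_TAGS = {}

def nist(*controls):
    """Tag a test function with the NIST controls it maps back to."""
    def wrapper(fn):
        NIST_TAGS[fn.__name__] = controls
        return fn
    return wrapper

@nist("AC-8")
def test_system_use_banner():
    assert True  # placeholder for the real assertion

@nist("AC-8", "AU-2")
def test_audit_events_logged():
    assert True  # placeholder for the real assertion

def nist_coverage():
    """Count how many tests map back to each NIST control."""
    counts = {}
    for controls in NIST_TAGS.values():
        for control in controls:
            counts[control] = counts.get(control, 0) + 1
    return counts
```

The coverage report is exactly the "635 tests mapped back to NIST" artifact a green build can attest to.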
So, continuous audit on the platform: have you guys heard of the CIS Benchmarks? I'll give a little background. We get a lot of questions in security: what's the definition of a secure Ubuntu image? A secure Windows image? A secure Red Hat, or a secure nginx, whatever it may be? There's a group of professionals in the security industry — I'm one of them — who participate and collaborate to define what those standards are. Are they perfect? No, they're meant to be more of a guideline. But at the end of the day you have a CIS Benchmark standard defining what a secure image, or a secure web hosting platform, might look like — and I've heard rumors there may be some work going on specifically for Cloud Foundry; we'll see how that goes. So you have the Center for Internet Security, which has defined those benchmarks, and what's cool is that with the advent of Chef, Puppet, and Ansible, people have taken the PDFs and XML files published by CIS and put them into executable Ruby or Python — whatever you're using with Chef, Puppet, or Ansible — so you can automate the deployment of a secure operating system. Couple that with something like Test Kitchen, and you get test-driven development for your operating system and your platform infrastructure as well, not just the application. I've talked a lot, so I'm going to pass it to Dave to talk about how we're putting this into play on our Cloud Foundry platform. Yeah, thanks Nathan. On this screen you see a red build. What you're seeing is an automated scan of a stemcell that was pushed to PivNet, pulled down, stood up with BOSH and some configuration, and then — using a Ruby gem that we partnered with Nathan's team to provide — scanned, so we can actively see the results.
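The benchmark-as-code idea can be miniaturized: the sketch below audits an `sshd_config` against two CIS-style rules. The rules are illustrative stand-ins, not the official CIS benchmark content, and a real run would use Chef/InSpec-style tooling rather than this hand-rolled parser:

```python
# Minimal sketch of a CIS-style benchmark check against an sshd_config file.
# Both rules are illustrative assumptions, not the published CIS benchmark.

CIS_STYLE_RULES = {
    "PermitRootLogin": "no",         # illustrative: no direct root SSH login
    "PasswordAuthentication": "no",  # illustrative: require key-based auth
}

def audit_sshd(config_text):
    """Return {setting: passed} for each rule against sshd_config-style text."""
    found = {}
    for raw in config_text.splitlines():
        line = raw.strip()
        if line and not line.startswith("#"):
            parts = line.split(None, 1)
            if len(parts) == 2:
                found[parts[0]] = parts[1]
    return {key: found.get(key) == want for key, want in CIS_STYLE_RULES.items()}
```

Wired into the pipeline, a failed rule is an audit finding produced with every build rather than once a year.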
Now, you see there are some failures — but the other thing we're partnering on is that not everything identified as a failure makes sense in this cloud environment. So we're working to define a filter, so that as these builds come up we're notified of the things that matter in our environment, and then we can choose to push to the rest of the environment. At that point we have documentation that this was done, that it passed or failed, and why. This is something we're actively doing and expanding, because we're interested in doing it not only for the applications that run on Cloud Foundry but for the platform itself. Thanks, Dave — I'll talk a little about how we're doing that, and Dave, the Cloud Foundry expert, can pull it together. The security tools in the security industry are evolving just like your tools are: they have APIs. The security department may not know that — or know what APIs are — but as we move into this continuous integration, continuous deployment environment, you can leverage Jenkins to call your static analysis tools, your vulnerability tools, your Metasploit automated pen-testing frameworks, produce artifacts on the fly, give developers real-time feedback, and also break the build if new vulnerabilities are introduced. I will add a caution there: these tools are also getting feeds from the vendors themselves. A developer may check in code to add a widget to the site, and at the same time a new public vulnerability has been detected and published by the vendor. The developer happens to commit just as the signatures come down, and the build breaks — through no fault of the developer, because of, say, a third-party library sitting in their code, some vulnerability that was detected in Spring. So you need a framework on the back end to provide for exceptions, and it can be as simple as a JSON template documenting exceptions within the source code — something that requires only a simple pull request for a developer to update to get their build green. Just make sure you add something to the backlog, to be prioritized like everything else, to go back and fix the vulnerability the security tool found. Those are some of the more difficult conversations, because your security people traditionally say, "you have a vulnerability — don't deploy," and "you have a 30-day patching cycle." Well, I have a two-week patching cycle, because I can get it into my next sprint — so I'm still going to deploy this code, because I didn't introduce that vulnerability. Those are the kinds of conversations you need to have; it's really an educational journey. So now we're getting into boxes seven and eight. This is where the continuous audit comes in: it's really about hooking these various security tools up to Jenkins. One thing I want to talk about — and Dave will too — is how we're utilizing Deployadactyl, which I'll let Dave introduce, coupled with security standards such as OWASP and NIST, to provide automated gating, along with many other things, in our CI/CD pipe. So, Dave — talk a little about Deployadactyl. Yeah — I know it was announced last year, but if you have not checked it out, you should.
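Going back to the exception template Nathan described a moment ago — a sketch of what that in-repo JSON file and the build-time filter might look like. The schema here is an assumption for illustration, not any real tool's format:

```python
# Hedged sketch of the exception-file idea: a JSON document versioned with the
# source, listing accepted findings, so only *new* vulnerabilities break the
# build. The field names are illustrative assumptions, not a real tool's schema.
import json

EXCEPTIONS = """
{
  "exceptions": [
    {
      "id": "CVE-2016-0001",
      "reason": "third-party Spring dependency; upgrade is on the backlog",
      "expires": "2016-12-31"
    }
  ]
}
"""

def unexcepted_findings(finding_ids, exceptions_json):
    """Filter scanner findings down to those with no documented exception."""
    allowed = {e["id"] for e in json.loads(exceptions_json)["exceptions"]}
    return [f for f in finding_ids if f not in allowed]
```

A simple pull request editing that file gets the build green, while the backlog item tracks the real fix.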
Deployadactyl is a tool we've written and contributed to open source — check out our GitHub project — that allows for automated blue-green deployments to multiple foundations. Our architecture is very complex, so having a developer deploy to each of our foundations, check them, worry about change records and all of that, is a lot of overhead and burden they don't need. Deployadactyl is our solution: it does blue-green deployment across multiple foundations, so developers know the environment is running exactly how it should across the entire production environment. And as Nathan's saying, we can leverage and extend that — have Deployadactyl, or Conveyor, interact with other components and other security tools, bring those in, and provide that validation as well. Yep — and just so I don't get the names crossed: we call Deployadactyl "Conveyor" internally, so when you hear me say Conveyor, that's essentially Deployadactyl. My team came in and worked with the platform team to identify and really understand the environment and how things are being deployed — so we don't create a gate that things just route around — and we identified that Deployadactyl really is a centralized point responsible for deploying onto their Cloud Foundry infrastructure. So how powerful would it be to leverage Deployadactyl to go check that the appropriate security artifacts are in place — either required from a compliance perspective (again, I don't love that framing; I look more at risk) or, from a security perspective, that the minimum acceptable risk has not been exceeded — before Deployadactyl, or Conveyor, pushes it out to the production instance? So in this case we're hooking Deployadactyl up with various tools.
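The artifact-checking idea might reduce to something like this — not Deployadactyl's actual API, just the concept of a gate that verifies required security artifacts and a risk threshold before a push. The artifact names and the threshold are assumptions:

```python
# Conceptual sketch only -- NOT Deployadactyl's real API. Before the blue-green
# push, verify the required security artifacts exist and residual risk is in
# tolerance. Names and threshold are illustrative assumptions.

REQUIRED_ARTIFACTS = {"static-analysis.json", "license-report.json", "vuln-scan.json"}

def gate_deploy(present_artifacts, residual_risk, max_risk=7.0):
    """Return (ok, missing): ok only if every artifact exists and risk is acceptable."""
    missing = sorted(REQUIRED_ARTIFACTS - set(present_artifacts))
    ok = not missing and residual_risk <= max_risk
    return ok, missing
```

The deployer then refuses the production push whenever `ok` is false, and the `missing` list tells the team exactly which evidence to produce.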
The first thing I want to talk about is licensing. How many people have dealt with legal in the enterprise — different people concerned about Apache versus MIT licenses, things like that? It's a very difficult conversation. Developers need to move fast; they don't have time to sit and read the license on every piece of code. There are commercial, vended technologies like Black Duck that give Deployadactyl the ability to reach out and validate that no licenses violate the enterprise's licensing policy, and that can be hooked up with Artifactory Pro as well, applied when the developer pulls down software. If you want a cheaper, open-source route, tools like Maven already produce a lot of this information, so through some creative scripting — Python or whatever you like to write — you can produce those artifacts, store them in a central location, and have Conveyor, Deployadactyl, check that they exist and parse them, whatever it may be. That's how you take these traditionally manual processes, where legal had to review things and manually approve or sign off on some form, and start to automate them in your CI/CD pipe. The next thing is automated pen testing. Anybody familiar with w3af? w3af is a pretty powerful dynamic analysis tool. It does require a bit of up-front configuration per application, because each application is a little unique, but the developers on the scrum team, partnered with a security person, can create a configuration for their application and store it in their source code. Then, when they deploy to production, you can have w3af running inside a Docker container, pass it a command line along with that configuration file, and say: hit my application and let me know if I introduced any new vulnerabilities.
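Backing up to the license gate for a moment: the core check is small. A sketch, where the allow-list is an illustrative assumption rather than anyone's real policy (tools like Black Duck of course do far more — transitive dependencies, obligations, matching against scanned source):

```python
# Hedged sketch of a license gate: compare each dependency's declared license
# against an allow-list. The policy set below is illustrative, not legal advice.

ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}  # assumed enterprise policy

def license_violations(dependencies):
    """dependencies: {name: license id}. Return the ones outside the allow-list."""
    return {name: lic for name, lic in dependencies.items()
            if lic not in ALLOWED_LICENSES}
```

An empty result lets the build proceed; anything else becomes an artifact for legal to review instead of a manual sign-off form.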
And if I do, give me immediate feedback. This is where you really start to get automated pen testing built into your CI/CD pipe, and again, you can have Deployadactyl check the results and either say "they ran it, it's clean" or "it found vulnerabilities — don't deploy." There are any number of scenarios you can introduce gradually over time. You don't have to break the build on day one; in fact, I encourage you not to. Hook these things up, start gathering stats and data. It should be a development tool, useful for the developers, before you start breaking their pipeline, because they need to understand what's happening and, more importantly, how to fix it or who to go to for help. The next thing is a vulnerability scanner, and this is probably one of my more passionate topics, because people traditionally think a vulnerability scanner just finds vulnerabilities — but it's much more powerful than that. A vulnerability scanner can get into your box, really profile it, and create artifacts on a number of things that matter to security, to compliance, and to the product manager. You can automatically profile the users on the system with every build, and diff between builds to identify whether new users have been added for whatever reason; you can do the same for groups and group membership, for a list of all installed software — RPMs, Debian packages, whatever it may be — and for bad configuration. And again, those CIS standards we talked about: your vulnerability scanner can measure you, compare you, and score you against those CIS Benchmarks, and tell you which ones you're meeting and which ones you're not.
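The user-profiling diff described above is simple to sketch — compare the account list between two builds and flag anything new or missing. Purely illustrative, standing in for what a real scanner's profiling report would feed you:

```python
# Sketch of per-build profiling: diff the system's user list between builds so
# an unexplained new account surfaces immediately as a build artifact.

def diff_users(previous_users, current_users):
    """Compare two /etc/passwd-style user lists captured on consecutive builds."""
    prev, curr = set(previous_users), set(current_users)
    return {"added": sorted(curr - prev), "removed": sorted(prev - curr)}
```

The same diff pattern applies to groups, installed packages, and configuration values.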
Okay, the next thing: not many auditors, not many security people, are going to want to go into Jenkins to view all these artifacts. In fact, they ask all the time, "what is Jenkins?" — and you try to explain it, and you're just not going to get anywhere. They have tools they're used to seeing things in. RSA Archer is what they call a GRC tool — governance, risk, and compliance — where, from a security standpoint, we can load templates for the laws we talked about, NIST, COBIT, whatever it may be. It's an easy way for them to map requirements from a legal and compliance perspective over to solutions, and then also to measure those solutions' effectiveness: which artifacts are we testing that prove the controls are in place and not being exploited? Again, tools like this are coming out with APIs. Your security people may not know that; your compliance people may not know that. But we know what APIs are and how to integrate with them. All the artifacts we've been talking about have been a developer view so far, but at the tail end of your pipe, right before you deploy, you can push that artifact or test result out to RSA Archer. You're giving them a real-time feed into residual risk — something they've never had before. They're used to doing an assessment or some sort of penetration test every quarter, and that's the best case; often it's a year, because pen-testing teams are not very big and are usually backlogged in an enterprise environment. So if you can automate some of what they do, and take the results they usually upload manually through a UI and automate that with Jenkins, you're giving them something they've never had: a real-time view into residual risk at the enterprise, at multiple levels — not just the application, but the infrastructure and the operating system layer as well. This is just a view from RSA Archer, and again, this is what they're used to seeing.
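RSA Archer's real API is its own topic, but the shape of the idea — bundling per-control results from a build into a record a GRC tool could ingest — might look like this. Every field name below is a made-up assumption, not Archer's actual schema:

```python
# Sketch only: package build results into a JSON record for a GRC tool.
# The record layout is an illustrative assumption, not RSA Archer's schema.
import json

def build_grc_record(app_name, build_id, control_results):
    """Bundle per-control pass/fail results from one build into a JSON record."""
    failed = [c for c, passed in control_results.items() if not passed]
    return json.dumps({
        "application": app_name,
        "build": build_id,
        "controls": control_results,
        "residual_risk_flags": failed,
    })
```

A tail-end Jenkins step would post one such record per build, turning the quarterly spreadsheet upload into a live feed.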
With their compliance and their scores and everything like that. As you begin to add those artifacts, they start getting a dynamic view, which will really change their operations. To be honest, they're going to start getting a live stream of data that forces them to figure out how to be a bit more agile. Right now they're saying, "hey, we're not getting data; they're not producing artifacts" — but as you give them more and more, it becomes, "okay, now I have too much information; how do I deal with this? How do I start doing agile risk management, agile risk assessment?" — which is a really cool place for an organization to be. A couple of things I forgot to mention earlier: the gem we're using to hook into Deployadactyl is specifically for Rapid7 Nexpose. So if you have Rapid7 Nexpose in your organization, there's a gem out there called nexpose-runner that you can download; it's basically command-line access that lets you run automated vulnerability scans. Another one: we've taken the whole of NIST 800-53 — which is about 385 security controls — and JSON-ified the entire thing. That becomes really powerful if you're in a situation where you have to create system security plans and things like that: you can automate Jenkins to populate the different controls — which are essentially your artifacts from your build — and produce automated system security plans. I'm not sure what's happening here — the slide is just jumping. So anyway, that's where we're at. If you have any questions, you're welcome to ask — we've got a full crew of Allstate people here you can talk to as well. So thank you for your time. Dave? Thanks for joining us — we'd love to hear your questions, talk with you, see what you've got.
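On that "JSON-ified" NIST 800-53 idea: a sketch of a control catalog plus build artifacts merged into a system-security-plan fragment. The two sample controls and the field names are illustrative only, not the real catalog format:

```python
# Hypothetical sketch: merge a JSON-ified control catalog with pipeline evidence
# into an SSP fragment. Catalog entries and fields are illustrative assumptions.

CATALOG = {
    "AC-8": "System Use Notification",
    "SI-2": "Flaw Remediation",
}

def ssp_fragment(build_artifacts):
    """build_artifacts: {control_id: evidence string produced by the pipeline}."""
    return [
        {"control": cid, "title": title,
         "evidence": build_artifacts.get(cid, "NOT YET AUTOMATED")}
        for cid, title in sorted(CATALOG.items())
    ]
```

Controls without pipeline evidence stay visibly flagged, so the SSP doubles as a backlog of what still needs automating.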
Yeah — so, to repeat for everyone who didn't hear: basically he's saying that some of these scans take a while to run — especially something like w3af, and sometimes static analysis — so how do you keep up with the pace and velocity of continuous deployment while developers sit and wait for a scan? He's absolutely right; that's a challenge we deal with. One thing we look at is: which tests can we run up front that are really, really quick, and have those be part of the automated gating into production — and which ones take longer and get run after the fact? Things like a fuzzer, for example — I didn't talk much about fuzzers, but sometimes a fuzzer can take days to run depending on the size of the application, and it isn't necessarily needed for every application. One approach is to classify your systems: this system is highly critical to the organization, so it may have a different policy and change process — it may require a fuzzer, just because of that application's sensitivity — whereas for systems that aren't as sensitive, we run the fuzzer after the fact, after the deploy, and true things up if it finds anything. I think that's individual to each organization and its risk tolerance — and within a company you'll have different risk tolerances based on the type of application. So no, you're not always going to run every single test; it's up to each group.
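That fast-versus-slow triage could be sketched as a small scan planner — quick checks always gate, long-running ones gate only for high-criticality systems. The scan names, durations, and tiers below are assumptions for illustration:

```python
# Sketch of scan triage: fast scans gate every deploy; slow scans (like a
# fuzzer) gate only high-criticality systems and otherwise run post-deploy.
# Entries and durations are illustrative assumptions.

SCANS = {
    "static-analysis": {"minutes": 5, "tier": "fast"},
    "license-check":   {"minutes": 1, "tier": "fast"},
    "fuzzer":          {"minutes": 2880, "tier": "slow"},  # can run for days
}

def plan_scans(criticality):
    """Return (gating, deferred) scan names for a system of this criticality."""
    gating = [name for name, scan in SCANS.items()
              if scan["tier"] == "fast" or criticality == "high"]
    deferred = [name for name in SCANS if name not in gating]
    return sorted(gating), sorted(deferred)
```

The criticality classification itself is the policy decision each organization, and each group within it, has to make.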
Yes sir? — I love it; that's where good ideas come from. Spider the doc... oh, Swagger documentation? Wow, I don't know the answer to that question. From a fuzzing perspective, traditionally what a fuzzer does is look at your inputs, either through APIs or on the application itself, and just nail them with an obscene amount of garbage to try to get 500s and things like that. So yeah, I had not thought of that — I'd like to follow up with you on it. I'm learning every day, and to be honest, everything you see up here came from partnering with developers, sitting next to them, and being open-minded enough to say: yes, I've done it this way for 10 years — now throw that away; okay guys, teach me Jenkins, teach me whatever it may be. Conversations like this are where all of this has been derived from, and it's not perfect — it's still a work in progress; there are still times today where we're challenged together, and we're getting better at it. Yes? So — yeah, we're working through that journey and figuring out some of those things as we speak. What you're getting into there is software-defined infrastructure. In that case, Jenkins could — and should — still be used to control that RDS: Amazon has APIs just like everything else, and it should be a software-defined deployment. If you do that, you can create test-driven infrastructure that looks for certain configurations — the my.cnf for MySQL, say — stored in the source code that your Jenkins job uses to create the RDS instance. So the same concepts you see here apply to the database user stories and everything else, if you're using software-defined infrastructure deployed to the cloud. It should still work — and I can say I've done it before. Now, I'm relatively new at Allstate and we're going through that journey here, so I'm going to learn a new way to do it; I guarantee it won't be done the same way I've done it in the past. But that test-driven infrastructure is really powerful.
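That test-driven-infrastructure idea — asserting on the database configuration your pipeline deploys — might look like this for a my.cnf-style file. The two settings checked are illustrative assumptions, not a complete hardening standard:

```python
# Sketch of test-driven infrastructure for a database: check that the MySQL
# config the pipeline deploys matches expectations. Settings are illustrative.

EXPECTED = {
    "local_infile": "0",       # illustrative: disable LOAD DATA LOCAL
    "skip_name_resolve": "1",  # illustrative: skip DNS lookups on connect
}

def audit_mycnf(cnf_text):
    """Parse a my.cnf-style 'key = value' file and check expected settings."""
    found = {}
    for raw in cnf_text.splitlines():
        line = raw.strip()
        if "=" in line and not line.startswith(("#", "[")):
            key, _, value = line.partition("=")
            found[key.strip()] = value.strip()
    return {key: found.get(key) == want for key, want in EXPECTED.items()}
```

Run against the config stored in source control, this fails the build before the RDS instance is ever created with a bad setting.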
Yes sir? Yeah — it's definitely a balance, like you said: security has certain objectives, and I think one of the challenges is more of a communication and education issue: what are the problem statements we're trying to solve with this — and, security, at the same time, what are your problem statements? Outcome-driven conversations, not solution statements like "this is what we've always done." That challenge is real, so to speak, and it's really about going back, whenever you see people talking in solution statements or specific implementations, and driving it back to a problem statement. What you'll usually end up with is some hybrid in between that neither side is entirely comfortable with at first — but over time they get more comfortable, because it's a fear of the unknown; once the unknown becomes less unknown, people buy in and continue to evolve and grow it. In the beginning, yes, it's very challenging, but it's really about problem statements and acceptance criteria more than solution statements and expectations, I guess. Okay — well, thank you all for coming. I appreciate it.