Well hello everybody and welcome again to another OpenShift Commons briefing, this time with long-time OpenShift Commons member Twistlock, who is going to talk about automated app defense, defending your applications on OpenShift. We have Michael Withrow with us from Twistlock, who has given us talks before, so I'm looking forward to this one, and I'm going to let Michael take it away. We'll do live Q&A in the chat. If you have questions, please ask them and we'll try to answer them, and then we'll have live Q&A at the end. So without any further ado, Michael, please take it away.

All right. Thank you, Diane. Just to let everybody know, my name is Michael Withrow, and I'm the Director of Customer Success here at Twistlock. I've also got Jeff Littlejohn, our VP of Business Development here at Twistlock, to help answer any questions as we go through. Like I said, thanks everybody for attending. The idea here is to walk everybody through Twistlock and what we do as a product.

So as we start out, what is Twistlock? From a capabilities point of view, we provide automated, lifecycle-based security from the beginning of the lifecycle all the way through the end. As you look at the entire Docker ecosystem, across all the different industries and all the different geos, you see representation across the entire ecosystem. We have many customers running Jenkins, many customers running Bamboo, CircleCI, Drone, Bitbucket, TeamCity, all from a CI perspective, so we have many capabilities to integrate there. Think of it as automated integration of the build: not only alerting on the build, but the ability to restrict those builds based on vulnerability state, threat state, governance state, those kinds of things.
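The build-gating idea described above could look roughly like the following Jenkins pipeline. This is a sketch, not the exact syntax of Twistlock's Jenkins plug-in: the stage names, image names, console address, and credential variables are all illustrative, and in the product the pass/fail severity thresholds are configured in the scan policy rather than hard-coded here.

```groovy
// Illustrative sketch only -- not the exact Twistlock plug-in DSL.
// Builds an image, scans it, and fails the build when the scan
// reports findings at or above the policy's severity threshold.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t myapp:${BUILD_NUMBER} .'
            }
        }
        stage('Scan') {
            steps {
                // twistcli is Twistlock's standalone scanner; the
                // address, credentials, and flags are placeholders.
                sh '''
                  ./twistcli images scan \
                      --address https://console.example.com:8083 \
                      --user $TL_USER --password $TL_PASS \
                      --details myapp:${BUILD_NUMBER}
                '''
            }
        }
        stage('Push') {
            steps {
                sh 'docker push customer.ose.registry.com:5000/myapp:${BUILD_NUMBER}'
            }
        }
    }
}
```

A non-zero exit from the scan step is what trips the build and kicks it back to the developer, as described in the talk.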
And as we move forward, we get into the CD portion of the conversation. That image, that artifact, gets moved upstream into a registry for deployment. Once again, there's a myriad of tools that exist from a registry perspective, whether it's any one of the public clouds, from Google, Amazon, or Azure, or the private ones: Docker DTR, Artifactory, Nexus, Quay, or the OpenShift registry. And then of course from a deployment perspective, this briefing is very focused on OpenShift, so in that regard you have an OpenShift cluster, on-premises, multi-tenanted, whatever it might be, and the ability to deploy to it, which is really the premise of what we're going to talk about today.

I'll skip through these as we go. From a product perspective, we really focus on a couple of key areas. The base of the product, and where we started out, is with an access control mechanism. There is some overlap with the OpenShift capabilities there; I'll talk about that a little bit further on. But as you move up the stack, you can see that we have compliance capabilities, really those industry governance settings: think CIS, think NIST, think PCI, HIPAA, SOC, those kinds of things when I'm talking compliance. We have all the vulnerability data, and like I said, not just vulnerability but also threat data as we move up. And then as you keep layering on, we bring in defense in depth, runtime defenses as well.
So as you go from static content over to runtime metadata, we essentially introduce additional threat protection against those different attack surfaces, rounding it out with cloud native firewalling capabilities that I'll talk through as we go. And with that, I'll get into the architecture, break it down, and then actually get into the product and show it.

So how do we do all of this? Like I said, automated is the key term. At the rate of change that containers bring into the marketplace, no matter how many bodies you throw at it, there is no manual process that can get in front of this. So automation is the key. And how we do this is that we are just a set of containerized technology ourselves. You can see the console here, which is really the front end of the product. It's where you set policy, where you look at the audit data, where the CVE data gets pumped into the environment, and where you build all your alerting notifications. So maybe you have an existing Slack channel, or JIRA ticketing, or PagerDuty, or something like that. You have all of your directory users coming in through Okta, or AD, or something like that. And then for your native Splunk users, or SumoLogic, or anything on one of the cloud vendors, you have the ability to do native syslog integration. Really, that front end is a singular entity. We have some HA capabilities, and we provide a YAML file, so you can deploy it right into OpenShift.
And so OpenShift will manage the HA of the console, with a PV and a YAML file for the PV and all that kind of stuff, which I'll talk about in a second as you scale things out. Then the point of enforcement is really downstream on all the different nodes. It depends on how you build out your OpenShift cluster: most of the time the masters are not open for scheduling, but your downstream infra and app nodes across the cluster are typically opened up for scheduling, and that's where the pods are actually going to get deployed. So what we're talking about here is having the defenders any place where you're going to be running images and containers. How your OpenShift cluster is set up will help dictate, from an architectural footprint, where the defenders need to be laid down to provide the automated protections I'll be talking through in this presentation.

Think of the defender as a privileged, paired container that runs like a proxy in the environment, which I'll talk more about as we go. That defender is going to be a point of presence on that particular node. As pods are deployed onto the node, it's going to build the baseline, build the state, get all the configuration data, and then it becomes a lifecycle-based conversation: what's the drift off of that baseline, what's the drift off of that state? So that's where the defender becomes a critical part of the architecture, to really define that configuration. And then, as I alluded to before, we also have the ability to extend well down into the CI/CD pipeline process.
And so, along with plug-ins, we have an independent tool called twistcli. It depends on what you might be leveraging from a CI perspective; in OpenShift, if you're running Jenkins pipeline-based builds, that's a native capability for us, which I'll show as we get a little bit deeper. And then for your OpenShift registry, you have the ability to plug that in as well. Depending on your topology, there are some OpenShift customers that have a Nexus front end, which is wide open to the internet. We have the ability to plug in there, so as the developers pull resources into it, we can start to build the state of the images in that Nexus registry before they're brought into the OpenShift registries inside the cluster. Whatever that topology might look like, we have a lot of different ways to plug in, because we are agnostic in that regard.

And so with that, I'm going to get out of the presentation side and really get into the product. All right, let me bring this over and maximize it. The key thing to understand is that this is the front end, and the defenders I was talking about before, installed across the environment, are what's really starting to build out your baseline. So I can see here the baseline of my running containers. As I look through, I can see which ones are internet connected, what the ingress and egress is across my different containers, whether they're behind a firewall, what the port structure is, what the vulnerability state is, all those particular things. I see the overall compliance state and the overall vulnerability state of those resources.
And essentially, how we do that is that by default we are built off of policy, which is all set to alerting, and that's what builds the state; all of that is defined as you go through. Then we make it really simple, in your case from an OpenShift perspective, to add in the registries. They're all going to be V2-based registries. You're going to put in your customer.ose.registry, whatever that might be, and then whatever your credentials are in order to get into that registry, you're going to define them there. From that point, as images are built or pushed into the registry, we can start giving you the state of those images in that registry, as you see here, and I'll talk through those a little bit deeper as we go.

The key is that the defenders, deployed across the OpenShift environment, have a set of binaries in them which are there for the purpose of conducting a static image analysis. That's the key term when we're talking about images as they exist and as they move through the lifecycle. And that brings us to the Jenkins side. What I have here is a real simple pipeline-based build to show the configuration. It's just a simple build-out, but it shows the flow: no matter how simple or complex that build process is, somewhere there is the act of actually doing a Docker build. To be fair, it could be a Docker pull, but at the end of the day there is an image artifact that we're looking for.
And we are typically the next step, scanning that particular image as it moves through the pipeline; like I said, an on-build capability. Now, there are a couple of really important things here, and you see a couple of different warnings that we have here. The base behavior is to warn. Depending on what your CI pipeline process looks like and what level of enforcement you're trying to roll out, we allow you to toggle that setting all the way up the line. So for the vulnerability state, maybe I want to warn on high or critical only, as an example. Or maybe I'm worried about everything, so I want to set it to low, or whatever it is. I can set that up, and then if there are any low vulnerabilities in there, we're going to trip that entire build and kick it back to the developer to remediate all those low-based vulnerabilities. This allows me to get that state and know what's going on, so I can move it up. Depending on how I want to build that pipeline, we give you the flexibility to tune that to whatever you need.

What's really unique here is that we also do the same thing on the governance side. Obviously we're talking about governance from an image perspective, because at this point there isn't any runtime metadata, those kinds of things. So think CIS benchmarks, which I'll talk about in a couple of minutes. There's a base set of checks that say, hey, look, on an image, best practices are things like not running as root, having a health check defined, those kinds of things. As a base layer, as I'm building that image, at a static level, which governance settings would trip if I'm trying to deploy this into a PCI, or SOC, or HIPAA, or whatever type of environment downstream?
So you have a lot of control over how you enforce that. We have some customers that block on CI but allow on CD, and some customers that allow on CI and block on CD as you go upstream. The key thing to understand, though, is that we're talking about the same set of binaries that I mentioned in the defender; the Jenkins plug-in has that same set of binaries, so the experience is the same. From a capabilities point of view, anywhere across the lifecycle where an image can live, we have the ability to plug in at a binary level and perform a static analysis of that image.

When we talk about that static analysis, depending on the state of the image, we really look at it from two different perspectives. This image on this particular build did have a couple of compliance issues: like I said, it was running as root, as an example, and there was a simple health-check-based violation, just to show a couple of different flavors. Obviously these can take many different forms, but this shows, as a base layer, what it looks like. And the cool part is that this is exactly what the developer experience is going to be like; all of this is defined here. Now we have all the base vulnerabilities that exist here, and all the different details are going to be displayed back to the developer. So if you warn on that build, or you fail that build, the developer will know why that build was failed.
And the developer will know what remediation steps need to take place in order to move that build upstream. Looking at that a little deeper, we have the ability to look at the images in the registry as well, and the ability to look at the images as they move from the registry and are deployed out into the environment. Now you have an image injected into a container that's running in a pod, on a node, inside of your OpenShift cluster. So we're talking about the ability to affect which images can actually get propagated in; but now a clean image has been deployed in your environment and new vulnerabilities come out. What happens? How do we notify on that? We break that process down a little bit deeper.

What you're really looking at, as we get into here... I only clicked that off. Why is that not clicking? Sorry about that. There we go; it was just being a little slow. From a static perspective, the first part of the conversation is going through here and clicking on the layers. The layers are really where we start from a static analysis perspective: hey, look, this image has 23 layers, and it's right at about a gig. So there are a couple of different problems from that perspective.

I did hear a chat coming in there, so let me back up and make sure I account for the chat. Diane, if it makes sense, as we're going through, if there are any questions, please just jump in and interject. And I will say that right now this console is not the one running on OpenShift. But let me show you something.
My dashboard is not running on OpenShift, but I do have an OpenShift node here and a couple of things set up; I didn't have a chance to build it all up before we got started. Let me back up and talk through the actual deployment, if that makes sense, and then I'll circle back to the vulnerability side. Yeah, perfect.

So, to catch up: the console is going to be the first thing that you actually deploy. The first real requirement we have is a persistent volume. You can see here that we have this persistent volume YAML file. This is the first thing you're going to build, and you can see the name of the file here. The label gets really important, because we're talking about binding the pod to this particular PV: we create a PVC, a persistent volume claim, that binds to it, and that label is how we bind it. And then the path. Depending on what your storage construct is, here we're using a host path. The important part is that by default the host path implementation is set to restricted, and there are a couple of different ways you can handle that. We actually have the logic inside of our YAML file to build out an SCC to relax the host path restriction, which I'll show you in a second. But depending on this, you're going to set the host path of where the PV is actually going to be mounted.

Then, from that perspective, I'm going to do an oc get pv. And you can see... oh, I've got to set my config file. So I want to do a sudo su, and then oc get pv.
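A hostPath PV of the kind described above might be sketched like this. This is a minimal illustration, not Twistlock's shipped manifest; the names, label key, capacity, and path are all assumptions.

```yaml
# Illustrative only -- not Twistlock's shipped manifest.
# A hostPath PersistentVolume labeled so a matching PVC can bind to it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: twistlock-console
  labels:
    app-volume: twistlock-console   # the PVC selects on this label
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /var/lib/twistlock        # where the PV is mounted on the node
```

The matching PVC in the console YAML would carry a selector on that same label, which is the binding the talk describes.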
And you can see here that we have a Twistlock PV that's bound. So I can do an oc describe pv on the Twistlock PV and get further details about how it's actually set up, all of which was defined in the YAML file that was built there.

Once that's there, the next thing we have is essentially the Twistlock console YAML. As we go through it, you'll see a couple of different things. We have a config map, which is really the configuration that's going to be built into the image, carrying all the constructs for the Twistlock console itself: the port structure, all that kind of stuff. And a couple of key things in here: depending on how the namespaces are constructed, do we need to build services and routes and all that kind of stuff? If you deploy the console and the defenders in the same namespace, you don't really need to think about services and ports. But if you do need to segment, depending on what your business criteria are, we provide those service and port examples in the config file.

And then, as I alluded to, we actually build out an SCC for you, a security context constraint named twistlock-console. You'll see that the first thing it does is toggle allowHostDirVolumePlugin to true, because as the base layer inside of OpenShift, the restricted SCC is going to kick in, which has that volume plugin set to false. There are a couple of different ways you can relax it: the least intrusive way is to create an SCC around it, or you can just modify restricted, which I wouldn't recommend. That's essentially the reason we build this out.
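An SCC along those lines could be sketched as follows. Only the allowHostDirVolumePlugin toggle and the name come from the talk; every other field, and the service account it is granted to, is an assumption about what such a file might contain, not the file Twistlock actually ships.

```yaml
# Illustrative sketch of a custom SCC -- not the exact file Twistlock ships.
# Relaxes only hostPath volumes instead of modifying the built-in
# "restricted" SCC, and grants the result to one service account.
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: twistlock-console
allowHostDirVolumePlugin: true      # the one setting "restricted" blocks
allowPrivilegedContainer: false     # the console does not run privileged
runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs
volumes:
  - hostPath
  - persistentVolumeClaim
  - configMap
  - secret
users:
  - system:serviceaccount:twistlock:twistlock-console
```

Creating a dedicated SCC like this is the "least intrusive" option the talk recommends, since the built-in restricted SCC stays untouched.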
We don't run as privileged; the console is the least privileged container. So the host path is really the only thing you have to worry about setting, and we do specify that inside of the YAML file. Then you see the claim, where that label becomes really important, as you saw back in the PV; it's what allows the PVC to actually be bound. From there, you'll see the base port structure, if you need to adjust it there. Those are really the big things. You'll see the data volume there. We do specify all of this, depending on what the registry is. Typically on an OpenShift deployment, everybody is using their OpenShift registry, so the exercise is taking the image for the console, putting it in the OpenShift registry, and then specifying the path for it here in the YAML file.

The idea from that point is that now we can actually run the YAML file. I did run into a problem with that, but just to give you a basic example: I was working through some permission stuff on the back end, so I was really close to having it up and running; I apologize for that. Here is basically my OpenShift cluster, and on it I do have the console running off the YAML file; I just had some syntax problems I didn't quite get a chance to sort out before this briefing. But that's the first thing: have that base set up and then get the console deployed.
Obviously, now you have the ability to browse the console, which you see here. This one is just running on a Docker environment, but I do have a raw Kubernetes environment, which OpenShift is obviously built off of, and the base is really the same. The Twistlock console is the Twistlock console, whether it's installed natively on the Docker daemon or installed as a pod inside of a cluster. It'll be served up on some network port, availability is built out, and there you have it.

To complete that circle: once you have the console deployed, the next thing is to build out the daemon set. You can see here we have a couple of different ways. Depending on your network topology and all that, you'll build this out here, which really says: which resource, and what is the networking connectivity for how the defenders are going to connect back to the console. You define all of that here. Typically, what I tell customers to do is an oc get svc, and then grab the cluster IP from that. I'm going to go into the Twistlock console and add that in. Then, as I do my deployment of the daemon set, the first thing I'm going to do is grab that cluster IP, go through the OpenShift option, select the socket (Docker, in this case), and then of course put in whatever my registry is: customer.ose.registry.com:5000, as an example. And typically, if it's an OpenShift registry, that means you need a secret to communicate with it.
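The sequence just described might look something like the following at the command line. This is a sketch of one plausible session against a live cluster; the namespace, secret name, registry address, and PV name are all hypothetical, and the exact steps the console generates for you may differ.

```
# Illustrative session, assuming a "twistlock" namespace and an internal
# registry at customer.ose.registry.com:5000 (both hypothetical).

# 1. Find the console service's cluster IP -- the address the defenders
#    will use to reach the console from inside the cluster.
oc get svc -n twistlock

# 2. Inspect the bound PV if anything looks off.
oc describe pv twistlock

# 3. Create a pull secret so nodes can pull the defender image from the
#    internal registry, and link it to the service account for pulls.
oc create secret docker-registry twistlock-pull \
    --docker-server=customer.ose.registry.com:5000 \
    --docker-username=<user> --docker-password=<token> -n twistlock
oc secrets link default twistlock-pull --for=pull -n twistlock

# 4. Apply the generated daemon set so every schedulable node runs a defender.
oc create -f daemonset.yaml -n twistlock
```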
And so that's one of the things we define in the config file as well; sorry about that, I closed it out. But you have the secret in the config file, twistlock-console or whatever it is, and then the output there is the YAML file. So now I have a daemonset.yaml, and you see here, as an example, this is on my Kubernetes box, just to show a couple of different places, because I didn't get that far in the OpenShift implementation. As we go through this YAML file, the construct is the same; this one is based off of Kubernetes. The big thing you account for, and want to make sure is set, is this WSS address. That's the path I was talking about for the defender to be able to communicate back with the console as you deploy it downstream on your nodes. The defender is going to communicate bidirectionally over TLS on port 8084. So this is essentially setting up that daemon set.

Now you build the daemon set out, and you let the OpenShift cluster manage the availability of your defenders across your nodes. As you add new nodes or take nodes away, the daemon set accounts for all of that and automates the process. Makes sense? Any other questions? Okay, good; I'll just monitor that periodically as we go.

So really, we give you three artifacts: a YAML file for the PV, a YAML file for the console, and a YAML file for the daemon set, which let you integrate Twistlock into your cluster and have it as a cluster-managed entity. Makes sense? Any questions from the group? I'll pause there; I know we're about halfway through.
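The shape of that third artifact, the defender daemon set, can be sketched roughly like this. The console-generated daemonset.yaml is the authoritative version; the image path, label names, environment variable name, and the example cluster IP here are assumptions made for illustration, with only the wss address idea and port 8084 coming from the talk.

```yaml
# Illustrative sketch of a defender DaemonSet -- the console-generated
# daemonset.yaml is authoritative; names and paths here are assumptions.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: twistlock-defender
  namespace: twistlock
spec:
  selector:
    matchLabels:
      app: twistlock-defender
  template:
    metadata:
      labels:
        app: twistlock-defender
    spec:
      imagePullSecrets:
        - name: twistlock-pull          # secret for the internal registry
      containers:
        - name: defender
          image: customer.ose.registry.com:5000/twistlock/defender:latest
          env:
            - name: WS_ADDRESS          # console endpoint the defender dials
              value: wss://172.30.41.62:8084   # cluster IP of the console svc
          volumeMounts:
            - name: docker-sock         # watch containers via the daemon
              mountPath: /var/run/docker.sock
      volumes:
        - name: docker-sock
          hostPath:
            path: /var/run/docker.sock
```

Because it is a daemon set, the scheduler places one defender on every schedulable node, which is exactly the "cluster-managed availability" described above.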
And if there are any questions, please come off mute and hit me up directly.

Yes. The one question I might have is: on the Twistlock website, where is the best place to find the documentation walking us through what you've just shown us?

Perfect, great segue. Right in the Twistlock console we have, let's see, "Learn more about this feature." Depending on where you're at in the UI, you click here and it's going to auto-log you on and take you right to the corresponding document. And there it is: any questions you have about setting up the daemon set deployment, you'd go to that document. If you had questions about the console, I could go in here and search; this is all a search-enabled field, and you can see all the different ways of installing the console.

To be completely fair, one thing that we are working on, Diane, for everybody in the community to be aware of, is that all of our documentation today is very specific to Kubernetes. Obviously, that is a precursor to OpenShift, because OpenShift is sitting on top of it. Right now, inside the documentation, we do have representation for oc create, as an example: if you're on Kubernetes we show a kubectl create, and if you're on OpenShift we show an oc create. But we're going to be even more descriptive than that. We have an action item, something we're working on right now, to build out separate documentation, so if you're doing an OpenShift deployment, you would type "OpenShift" into the search bar and it would bring up the documents for the persistent volume, the console, and the daemon set for OpenShift.
If you're running on Kubernetes, you would type "Kubernetes" and bring up those documents. So right now it's all predicated on Kubernetes, but we are segmenting that out really soon, in our next release, from the documentation perspective.

Thank you for that.

No, good. Thanks; I should have done that first. I apologize; I went right into how it was running instead of how you should deploy it. That was on me. But what I hope you take away from this is that we make it really easy to generate the YAML files and execute them to deploy the product. Most customers, without any precursory knowledge of the product, can deploy this really seamlessly by integrating with your DevOps teams, those kinds of things.

All right. Carrying on, if there are no other questions about deployment, setup, or architecture, I'll spend the next half hour getting a little deeper into what we bring once you've got Twistlock deployed across your OpenShift cluster: what we provide from a topology perspective, a view perspective, and a restriction perspective.

Looking at this a little deeper, going back to where I was before: when we do the static analysis, we're really breaking down the different layers that exist in the image for the purpose of generating a bill of materials. Here we have the bill of materials for this particular image, which becomes the artifact, based off of the SHA, as that image moves through the lifecycle. And what's really unique about us as well: hey, we know developers are building JAR files, TAR files, WAR files, ZIP files, npm packages and injecting them into images. We have the ability to crack those open, looking for binary executables within.
Where we see those, we'll actually link them here. Notice we have some licensing components, not enforcement, but alerting; all of that is defined, all for the purpose of knowing what to link vulnerability data to. And this is what gets into the lifecycle portion of the conversation. Now that we have that bill of materials, whether we developed it in CI, in CD, or out in runtime (say someone skipped the entire CI process and did a Docker pull on a node, right from Docker Hub, as an example), we can generate it and do all the corresponding alerting and notification. But the base is: now that we have that bill of materials, we can start linking in vulnerability data.

The really cool part is that we have a SaaS service, and when you look at it, it's a real-time intelligent CVE streaming service which does normalization of content across over 30 different providers. So from your perspective, you're running RHEL or Atomic and you're deploying RHEL-based images in your environment, and all of that is propagated through, so all those feeds are going to be pulled in. And we are purpose-built to go through this, because when I say over 30 different providers: one thing we do that's unique is we pull the NVD database from NIST. We also pull the RHEL OVAL feed, all the different distro OVAL feeds, and all the different language feeds. We pay for certain vulnerability data. We have a partnership with Proofpoint to give us IP and malware content, and a partnership with a zero-day malware company called Exodus, which gives us our zero-day content. And then we pump machine learning into all of this.
Essentially what we're talking about is making intelligent decisions about which source, from which vendor, we want to import for this particular package, in this particular image, at this point of the life cycle. CVEs get posted all the time, in real time or near real time depending on the SLAs of the OVAL feeds, which is why I want to be precise when I say real time versus near real time. But with all that data pumped in, we're sourcing the most accurate information we can from the feeds. We're not pulling from one or two sources and doing a raw dump; we're intelligently looking through the feed data and figuring out which is the most accurate source for each package. You can see here I'm running a CentOS-based image, there's the SHA, and so on. When I pull up this particular package, it comes right off NIST NVD. But as I go downstream to other packages, the conversation is different: you can see this one is actually off the Red Hat feed, because for that particular package the most accurate source was the Red Hat feed. And depending on the package, you can see we went over to Snyk for this one. So very sourced, very purpose-built, really focused on eliminating false positives. The idea and the premise here is to build the baseline.
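The actual selection logic is Twistlock's, but the idea of per-package source selection can be sketched like this; the source names, priority ordering, and record shapes below are illustrative assumptions, not the product's real feed model.

```python
# Hypothetical priority: a distro's own advisory beats a language scanner,
# which beats the generic NVD record for the same CVE.
PRIORITY = {"rhel-oval": 3, "snyk": 2, "nvd": 1}

def best_advisory(advisories):
    """Pick the record from the highest-priority source that reported this package."""
    return max(advisories, key=lambda a: PRIORITY.get(a["source"], 0))

def link_vulnerabilities(bom, feed):
    """Join a bill of materials to normalized feed data, one best record per package."""
    linked = {}
    for pkg in bom:
        hits = [a for a in feed if a["package"] == pkg["name"]]
        if hits:
            linked[pkg["name"]] = best_advisory(hits)
    return linked
```

The point of the priority table is the false-positive angle from the talk: an upstream NVD entry may be marked fixed or not-applicable in the distro's own feed, so the distro feed wins when both exist.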
Good, bad, or indifferent, you have the baseline of your environment, and that's essentially what this dashboard gives you. But we don't stop there. We give you the baseline of your host layer as well. You're running those RHEL-based hosts; we give you their vulnerability and compliance state essentially the same way we do with images: we build out the bill of materials, and then we know what vulnerability and compliance data to link in.

Then we layer on another level of analytics. Because: "Hey, thanks, Twistlock, you told me I have a thousand vulnerabilities. What do I even do with that data?" That's what our explorers are for; they pump in analytics to give you real direction for remediation. Here's a particular CVE; maybe it's a low, medium, or critical CVE. But what's more important is where that CVE has actually propagated. Here's the configuration data for that propagation: here's the image, here's the container, here's the host. This one is just an image, not actually deployed in a container anywhere. But as I look across, these containers are all in different states. This one is actually exposed to the internet, running a high-severity CVE; it represents the most risk to my organization, which is why it's ranked number one and why I probably want to remediate it first. And we give you all of that data so you can propagate it out. This particular container has a couple of different infractions on it, but among them, this is the one that represents the most risk.
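The ranking described above, CVE severity combined with deployment context, can be sketched roughly like this; the weights and field names are made up for illustration and are not Twistlock's real scoring.

```python
# Toy severity scale and context multipliers; real risk scoring is more nuanced.
SEVERITY = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def risk_score(finding):
    """Combine CVE severity with the deployment context of the container."""
    score = SEVERITY[finding["severity"]]
    if finding.get("running"):            # deployed in a live container, not just an image at rest
        score *= 2
    if finding.get("internet_exposed"):   # reachable from outside the cluster
        score *= 3
    return score

def rank(findings):
    """Highest risk first: remediate from the top of the list down."""
    return sorted(findings, key=risk_score, reverse=True)
```

This is why an internet-exposed container with a high-severity CVE can outrank a critical CVE that only exists in an undeployed image.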
And this is a bubbling-up topology: I remediate number one, two becomes one, eleven becomes ten, until you've remediated all the risks out there. You can see the rankings change because it's the combination of the two entities that dictates the risk: the CVE and the container it's actually deployed in.

This gets into the remediation portion of the conversation, and as I mentioned with Jenkins, we also affect the CI/CD pipeline. On the CI side, we have the ability not only to warn, but to block images that violate a vulnerability threshold or a governance threshold. On the CD side, we provide blocking mechanisms as well. So on the infra nodes or the app tier of your OpenShift cluster: maybe a developer builds a clean image and it gets posted to your OpenShift registry, but that image has been sitting in the registry for a day, a week, two weeks, and now there are vulnerabilities against it. Maybe you missed the alert notification, and OpenShift picks the image up and tries to do a deployment off it. Now you've basically given orchestration carte blanche to deploy a vulnerable image as a pod in your cluster. What we're saying here is that I can actually block that. You can set the threshold; I have a lot of granularity in what type of thresholds I define across which entities, and then I can get very prescriptive about where I apply them. What's very common when you look at OpenShift topologies is that customers have multiple swim lanes for their OpenShift clusters.
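A CI gate of the kind described, fail the build when findings reach a threshold, might look like this in outline; the function and threshold names here are hypothetical, not a Twistlock API.

```python
# Ordered severity scale: anything at or above the blocking threshold fails the build.
LEVELS = ["low", "medium", "high", "critical"]

def gate(findings, block_at="high"):
    """Return (passed, offending) for an image scan result.

    `findings` is a list of dicts with a "severity" key; a Jenkins stage
    would call this and exit nonzero when `passed` is False."""
    floor = LEVELS.index(block_at)
    offending = [f for f in findings if LEVELS.index(f["severity"]) >= floor]
    return (len(offending) == 0, offending)
```

The same predicate applied at deploy time, rather than build time, is the CD-side block described above: the registry image is rescanned against current feed data before the pod is admitted.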
Maybe one, two, three, or five downstream pre-prod clusters, and then a single production cluster or several, because maybe one cluster is PCI and another is HIPAA, or whatever the setup might be. We have the ability to segment policy across those resources. So I can go through here and add in all my nodes. Maybe this is a five-node cluster; I could add them individually, node one, node two, and so on, but that's really where labels come in: everything in pre-prod is going to be customer.pre-something, whatever it is, so I do a star, and now I've pulled the entire cluster into that particular policy and defined a posture for it. What's very common is that customers use the policy setup to tier out their environment. So I can define: from a vulnerability perspective, downstream I just want to be alerted to vulnerability states, all the way through dev, all the way through pre-prod. But when I get to production there should be no surprises and no drift; I want to block vulnerabilities from actually running in production. That's a typical setup customers have deployed with our product.

All right, let me make sure there aren't any other questions. But that gets into the other part of the product: alerting. We do a native dump into syslog. From an OpenShift perspective, OpenShift is writing into the syslog feed, and we write into one of the syslog facilities as well.
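The wildcard scoping described above is essentially shell-style glob matching over node labels. Here's a small sketch, with made-up label names, of pulling a whole cluster into a policy and tiering alert-versus-block by environment.

```python
import fnmatch

def select_nodes(nodes, pattern):
    """Pull every node whose label matches the glob into a policy's scope."""
    return [n for n in nodes if fnmatch.fnmatch(n, pattern)]

def effect_for(node, policies):
    """First matching policy wins; unmatched nodes fall back to alert-only."""
    for pattern, effect in policies:
        if fnmatch.fnmatch(node, pattern):
            return effect
    return "alert"

# Illustrative tiering: alert all the way through pre-prod, block in production.
policies = [
    ("customer.prod.*", "block"),
    ("customer.pre.*", "alert"),
]
```

The ordering matters: more specific or stricter patterns go first, so a production node never silently falls through to the permissive pre-prod rule.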
So if you're already dumping syslog into a SIEM off your OpenShift setup, we can tie natively into that and give you the audit data off the OpenShift nodes, as well as the pods, containers, and images above them. Maybe you have a Slack channel, or JIRA for ticketing, or SMTP; we can integrate there and then segment not only which avenue the data gets pumped out through, but who you want to send it to, what type of data you want to send them, and at what frequency. With me so far?

That's great. Actually, that was the one question: where's the alerting in all of this? Yeah, and that's exactly where I wanted to get to at this point: a basic example of vulnerabilities, and then how you operationalize them. That's really the key term. It's one thing for us to give you all the data, but how do you actually use it? What do you actually do with it? That's the premise of how I wanted to tailor this. This product is not designed, nor is it needed, for you to live in all day, every day. That's what the alerting is for; we're a REST-API-based product. Through the Defenders, the configuration data, and the policy, we build a lot of information about your particular environment, and then you tell Twistlock how you want to distribute it. If a developer builds an image and that image has vulnerabilities, I want to be notified. If somebody posts an image that has vulnerabilities, I want to be notified. If I have a production image or a production pod that is clean, but a zero-day just hit it, I want to be notified, right?
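The routing being described, which channel, which recipients, which event types, reduces to a small rules table. This sketch uses invented event and channel names purely to show the shape; it is not Twistlock's alert configuration.

```python
# Hypothetical routing table: one rule per (event type -> channel, recipients).
ROUTES = [
    {"event": "zero_day",     "channel": "email", "to": ["secops@example.com"]},
    {"event": "image_push",   "channel": "slack", "to": ["#builds"]},
    {"event": "deploy_block", "channel": "jira",  "to": ["OPS"]},
]

def route(event):
    """Fan one audit event out to every channel whose rule matches its type."""
    return [(r["channel"], r["to"]) for r in ROUTES if r["event"] == event["event"]]
```

Frequency control (digest versus immediate) would be one more field on each rule; the essential point is that the console is not the delivery mechanism, the rules are.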
That's really how we're built out. You set your email chain, and now a zero-day hits: boom, X, Y, and Z people get notified. A developer posts an image with vulnerabilities: boom, X, Y, and Z people get notified. You don't have to live in the console waiting for an alert; off the alert, we build the notification and cascade it downstream. Make sense? All right.

Yeah, that's great. All right, we do the same things on the governance side. What's really cool here is that Twistlock as a company has made a couple of key contributions to the ecosystem around CIS: not only the CIS benchmark for Docker, but CIS for Kubernetes as well. Those are native to the product: here are all the Docker checks and here are all the Kubernetes checks. So as you deploy your OpenShift cluster, we help you maintain the baseline of that topology, all from a security point of view. You can see here there are roughly 300 to 400 checks out of the box. And we also have the ability to extend, through OpenSCAP; Red Hat has an OpenSCAP capability. Here's an OpenSCAP XML that we built out, and a lot of customers are using this to get very application-centric. This is really saying: look, I don't want to deploy privileged containers, I don't want anything running as root, I don't want the ability to SSH in, those kinds of things.
But more granular than that, they'll say: I want to make sure my developers are building the application on this Linux family, with this version of the application, with this port structure exposed, with this permission set on this user and this permission set on this file system, as an example. So developers are starting to leverage us as well, and how they're doing that is through OpenSCAP. We have native capabilities to go through SCAP and upload data streams. I can go in here, add a data stream, maybe pick up the sample DS streams that exist out there, build those out, add them in, and enforce them as well. (That's the reason I couldn't delete this one: it was in use.) But simply enough, what we do is allow you not only to alert on the posture, but to block the deployment of images into pods that violate it. I can define what I want to happen if a host has these settings. Whether the master is deployed as an RPM or as a container obviously affects which settings apply, but we've got it covered from both perspectives. I can say: this is what my master has to look like when I deploy it, this is what my API server and my worker nodes look like, what communication structure they have; all of that is defined. If I'm doing federation, which sits upstream of that: not a lot of customers are doing federation yet, but we're starting to see an uptick. I'm pretty sure OpenShift supports federation from a master perspective; Diane, you'll have to keep me honest on that one. But that's all defined there as well.
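The posture checks described, no privileged containers, nothing running as root, no SSH, are essentially predicates over a container's configuration. Here's a toy version; the rule names are illustrative, not actual CIS benchmark IDs, and the config shape is invented for the example.

```python
# Each rule is (description, predicate over a container-config dict).
RULES = [
    ("no privileged containers", lambda c: not c.get("privileged", False)),
    ("must not run as root",     lambda c: c.get("user") not in (None, "root", "0")),
    ("no SSH daemon in image",   lambda c: "sshd" not in c.get("processes", [])),
]

def check(container):
    """Return the list of violated rules; an empty list means compliant."""
    return [name for name, ok in RULES if not ok(container)]

def admit(container, enforce=True):
    """Alert-only downstream, block in production, per the tiering described earlier."""
    violations = check(container)
    return (not violations) if enforce else True, violations
```

An uploaded OpenSCAP data stream plays the role of the `RULES` table here: it carries the org-specific predicates (Linux family, exposed ports, file permissions) alongside the built-in benchmark checks.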
All right, pivoting off of that: vulnerabilities and compliance, those two constructs, are really there to cover the CI/CD portion of the product, your static content. As we move upstream, that's where access control, runtime, and the firewalls come in. I won't talk much about access control today, but a selfish plug: we actually wrote the authz plugin for the Docker daemon itself on the open source side, and it's what OpenShift leverages to provide its access control mechanisms. So there's essentially feature parity between how we do access control and how OpenShift does it, because underneath they're using the same plugin engine.

So now we've secured the CI/CD pipeline; we've refined how images get built and how they get deployed. The only thing we're really concerned with now is drift off of that setup, and that's the basis of how runtime is built out. How we do this is automatic, through modeling. The Defender, which sits on all your OpenShift nodes, leverages authz as plumbing down into the kernel; we drop sensors into the kernel so we can record and listen to the transactions that traverse them. When you get into orchestration and the restriction of orchestration, say someone elevates out of the orchestration engine and tries to do something through that conduit, that's how we restrict, alert, and notify. Obviously this is the 200-level view; if you want to go deeper, you can reach out to us directly and we can schedule a follow-on deep dive and talk through, at the binary level, how this really works. But the base thing I want you to walk away understanding is that at a very native level we integrate with OpenShift; we integrate with the daemon itself. So as OpenShift tries to do a deployment, depending on how you set up your CI/CD pipeline, we can enforce a threshold at your discretion.

Once a container is deployed, think of us as going into record mode. All these sensors, a process sensor, a network sensor, a file system sensor, a system call sensor, go into record mode, and we're looking at two base things. Static is first and foremost; obviously we got that from the Dockerfile. But behavioral is all the metadata once the container is up and running: what is it actually doing? So at a base level: here are the processes you're allowed to invoke, here are the network calls you're allowed to make, here are the file system and system call transactions you're allowed to make. We do the same thing on the host layer, providing that runtime protection there too: the child processes and all of that. Notice the behavioral content that's defined there. Once we have that model, essentially everything that happens outside of it generates an audit, because the premise of the models is to capture operational intent. Malicious activity is part of that, but it's not the only thing we look at. If this container's purpose in life is to run Jenkins, and all of a sudden it's running Tomcat entities or Mongo entities, something outside that intent, we're going to let you know. But these audits can get rather noisy, right? And so that's
essentially what Incident Explorer is: another set of analytics on top of that. We have a set of algorithms that comb through those audit trails looking for trends and anomalies of malicious intent and pulling them out as incidents. You can see an example kill chain here: the entire forensic trail of what actually transpired. Someone came in, made a system call, which did a file system write, et cetera; there are 16 steps in this kill chain. Further down, here's a more complex one, a 92-step kill chain, all defined through file system writes, network calls, and so on. The key thing to understand is that any one of these actions by itself doesn't necessarily indicate a kill chain or a trend; it's the collective of them together that builds it out. We then look north and south of the event for other things in that field, and that's how we identify kill chains: botnet attacks, Trojan horse attacks, SQL injection attacks, XSS-based attacks, DDoS, all those kinds of things.

So the base behavior is that we alert on this, but we also bring in capabilities to get more granular in how you react. Notice this Prevent button. Now that we've defined the whitelist: if anything happens outside of it, do you want to be alerted and notified, do you want to block it, or do you want to prevent it? Block means killing the container, killing the pod; and from an OpenShift perspective, if we just kill a pod, OpenShift will try to spin it back up, so in blocking we actually kill it and put it into a forensic state to prevent that redeploy. That's obviously really intrusive. What's more impactful, and more useful, is the Prevent button: we know what processes the container is supposed to invoke, so if something gets invoked that's not on the list, we block that process, we block that file system write, we block that network packet.

I know we have about six minutes left, so I'll close out with the firewalls and then open it up for Q&A. The key thing I want to leave you with is that on the runtime side, we give you in-depth data about what actually transpired outside the model, what changed, and then meaningful ways to react: do I want alerting and notification, or do I want to actually block it? So here's our WAF as an example; we're building out additional capabilities here. I have a really simple rule that says: block the ability for anybody to talk to this web page via a browser, an all-or-nothing kind of scenario. I set this to block, and real quick, if I go here, we just kicked in dynamically and blocked that particular container. All the while, if you look at a docker ps (if I could type; I'm on the wrong box, there we go), the Jenkins container never went down; you can see it's up and running, it's been up for six days. So I can simply set this back to alert, refresh, and now the packets are allowed through again. We never actually killed the container; we just blocked those packets. So as you look here, we have
the ability to do that from a Layer 7 perspective as well as a Layer 3 perspective. What's really cool on the Layer 3 side is that we know, through modeling, from both a static and a behavioral perspective, what the network transactions are. Whether it's a Docker Compose setup or certain pods that are supposed to communicate with each other, we know all of that through the modeling. Now, do you want to prevent any drift off of that, or just alert on it? So from a Layer 3 perspective: port 8084 is only allowed inbound, but if all of a sudden the container makes a port 80 callback, what do we want to do? Be alerted to it, or prevent that packet from actually traversing? All right, I know I can go a lot deeper, and I went right up to the top of the hour, so let's open it up for Q&A for the last couple of minutes.

Well, I think you've done a really good job with this. Could you share your contact information? Do you have a slide there so that people trying to get hold of you can?

Yeah, it's michael@twistlock.com and jeff@twistlock.com. And why don't you go to the Twistlock documentation and maybe bring up that Kubernetes deployment page? That's probably a really good place for folks to start.

Yep, absolutely. That's where I would ask people to come to find out how to get started and install everything.

I think you did a pretty amazing job covering almost every aspect of this, so we'll give people a few more minutes to ask any questions they might have. But as soon as you get your OpenShift-specific documentation done, let me know. I'm going to post this video up on blog.openshift.com, as I do with OpenShift Commons briefings, and I'll add the link to that document into the post that goes up with this video, and you can always update it. Again, this is really great stuff. I'm so glad to see the alerting, because it is overwhelming how much information there is if you don't have a way to segment it out and send it off to the appropriate people or the appropriate channel. It's great, it's visual, it's wonderful, but in order to operationalize it you really need that alerting capability built in as well, so it's much appreciated by folks like myself. And as always, when you come out with a new release, or if you have a customer using the product in an interesting way or a great case study, let me know. If you have any questions, please put them in the comments below or feel free to contact us. We'll be back with another OpenShift Commons briefing sometime in the not-too-distant future, and I hope you'll join us down in Austin at KubeCon; the day before, we're doing the OpenShift Commons Gathering again, and there will be lots of people talking about security and all of this good stuff there as well.

Well, I can't imagine not being at a Kubernetes-related event, so we'll see you there. All right, thank you very much for this. I'm going to pause the recording, and we can chat afterwards.