My name is John Morello, I'm the CTO of Twistlock. How many of you guys have heard of Twistlock before? Well, that's awesome. This warms my heart. This is not a product pitch for Twistlock. We'll definitely talk about a lot of things that we do, but this is not a deep dive on Twistlock. Of course, we would be more than happy to give you a deep dive on Twistlock. You can go to booth number, I don't remember now, G something. It's in there. There's not that many. You'll find it at 28, maybe. If Jess is here, she could tell me. 28, G28. But what we're going to talk about today is the way that containers really transform how you can do security and the degree of automation that you can bring to securing applications, in ways that you really haven't been able to do on historical architectural patterns. And when I say legacy, or historical, or something like that during the talk, what I'm really referring to is any scenario in which you're deploying your application into a virtual machine or a server where the application and the operating system are intermingled. The traditional way: you go and run apt install whatever, or double-click an MSI file, or something like that, and the application deploys within the operating system, and there's really no way to disentangle the two from each other. Containers, obviously, are quite different in that regard, and we'll talk a lot about what that enables. But that's really the shift that we're looking at here. A lot of you guys are probably coming at this from a DevOps background, a developer background, and so forth, which is great. I've done security my entire career, basically. I was a CISO at a Fortune 500 before we started Twistlock. I've done that pretty much my whole life.
And so looking at this from a security standpoint, the perspective I want you to get from it is how fundamental a change containers enable in securing applications in a way that's much more autonomous and much more scalable than what you've been able to do in the past. That's really the focus of the talk today. Just to set a little bit of context, there are two trends that you guys are already really well familiar with. I'm not going to spend a lot of time on this. But these two trends are creating this inflection point where people almost have to choose a different way of doing security than they have in the past. Everybody's heard that Andreessen quote about software eating the world and every company becoming a software company. I can't tell you how many times I've spoken to customers or read some press article, and it's nauseating to hear everybody say, we're a software company that flies airplanes, or we're a coffee company that happens to make software. Everybody wants to be a software company. People do that to different degrees of effectiveness. But the reality is that, not just in B-to-C scenarios but in B-to-B scenarios as well, the expectation is that you're going to have a good experience interacting electronically with whatever entity you're doing business with, and you're going to do that through software. And so even old-school manufacturing companies that you may not think of as leading-edge organizations, and we have a lot of those in our customer base, realize that this is a competitive differentiator for them. What that's creating is this impetus pushing these organizations to do a lot more agile development and deliver a lot more software, more rapidly, than they have in the past. But in many cases, those organizations are completely ill-equipped to deal with that from a security standpoint.
They come from a security world that was all about: I've got a perimeter firewall. I've got my Qualys or whatever. I scan every 30 days. I give you a report once a month. And that's awesome, that you can see within 30 days how vulnerable you are. Obviously, if you're finding out three weeks after there's some 0-day in your front-end application, you're probably already owned at that point. So that inflection point is pushing everybody there. The other side of it, though, is that the attackers are able to take advantage of all these tools of mass innovation just like the good guys are. In the digital domain, the advantages are almost entirely with the attacker. Everything the defenders have to help themselves, in terms of cloud services and automation and so forth, attackers have just the same ability to use. So the attacks that you see out there, and the way that people are attacking applications, are much more scalable than in the past. You can go to Shodan, for example, and create a query to look for a vulnerable component someplace, and you can sweep the entire internet, and you only pay for the number of results you get back above 10 million, right? You can get the first 10 million victims for free. And so you're dealing with an entirely different threat space, because the attacks are so much easier for a bad guy to scale out and run on a global basis at virtually no cost to them. So those two things create this real inflection point where you have to have a different way of doing security. And our fundamental belief is that containers help you improve security. Does that mean that if you have a bad application that's vulnerable and you put it into a container, it becomes magically secure? Of course it does. Of course it does, absolutely. That's what I'm here to tell you.
Containerize everything and it'll be secure. No. In here, I'm sure nobody believes that. Of course, there are many people out there in the public at large that do sort of believe that. The same way they believe that moving to the cloud will make them secure. The same way that VMware sold them that putting stuff in virtual machines was gonna make them more secure. It gives you the capabilities, right? It gives you the tooling. You can use that tooling or not use that tooling, but what we're gonna talk about today is: if you use it in a wise way, what are the advantages you're able to get from it? I've presented this same idea a number of times over the past several years, and there was a time two years ago where the debate basically was, are containers secure, right? That used to be a big thing. If you think back to 2015, people were asking, is Docker secure and all that stuff, or are containers as secure as VMs? The reality is they're a tool, right? And that tool, if you use it well, can give you certain advantages, and that's what this talk is about: what those advantages are and how you're actually able to leverage them. So think about the old world model of security, the one most organizations either are in today or are trying to get away from. It's what you see here. It's basically something very manual, where you as a developer, before you ship your application, are gonna sit down with a human being, like a person, like you actually have to talk to them. You can't even do it through Slack a lot of the time. You actually have to sit with a human being, which is so old, right? And you have to describe to them how the application works. My app listens on this port, and it runs this process, and it talks to this database, and this database is at a particular IP address.
Just assume it's hard coded in my app, and so it's never gonna change, because virtual machines don't really change, for the most part, for people. And then if your security organization is really on top of their shit, they're gonna go to their firewall and configure one set of rules, and go to your IDS and configure a different set of rules. Maybe they go to some sort of anti-malware tool and do something else. How many security teams ever really do that, though? Very few, right? Because it's really hard to do that over time. And even if they do that on day one, on day two or day 10, or whenever you ship the next version of the application, or you scale that app out and now it's talking to multiple different shards of a database or whatever, those rules are broken, right? So that notion of security rule rot is a really prevalent, real thing that you deal with in an organization, because the notion of statically configuring everything basically puts you into this model where the thing you're trying to protect can never change, otherwise it's never gonna be protected the way that it was. And so what organizations ultimately end up doing is going to that M&M model of security, right? Where it's hard on the outside and soft and chewy on the inside. It's just a perimeter, right? All they have is a firewall on the outside, and you basically just kind of hope for the best for everything else. And if you think about these applications like I was talking about earlier, your airline and your bank and the people you buy coffee from, all the stuff they're putting out there that has your credit card information and your personal identity information, hopefully they're being wiser about protecting things, and not simply relying on the fact that I have a firewall out there, and the firewall only allows traffic to TCP 443, and so my app is secure now, right?
There's a lot of that mentality still out there, and I think hopefully people are beginning to realize, or many people do realize, that that alone is not a very effective way to do it. The protection has got to be much more application-centric, and it's got to be much closer to the app itself. So in addition to this notion of old world security, you have that other forcing function I was talking about earlier, where all these organizations are trying to use DevOps, they're trying to be agile, they're trying to push software much more rapidly, and then you come to the situation where you have your security cake: you can either have it or you can eat it, but you can't do both, right? You can either have your DevOps, your agile stuff, or you can have your old world way of doing security; you can't do the two together, they're fundamentally incompatible. If you're shipping a new version of your application every day, every week, every hour, whatever, and you're using tools like Kubernetes to scale that application out, there's no way that a manual process is gonna keep up with the degree of change, the number of entities, and the ephemeral nature of those entities quickly enough to keep them really secure. And so the choice becomes: which one do you want, right? Do you wanna maintain this old way of securing the app, or do you wanna take advantage of all these things that your business is pushing you to do because they give you some sort of competitive advantage in whatever marketplace you exist in? And that's really what's driving us to this new model of having a different way of doing security, because that choice is something most organizations don't want, right? They wanna have security, but they don't wanna have to give up the kinds of capabilities that are interesting to them from a DevOps standpoint.
And so containers give us this opportunity to do things differently. There are a lot of things that containers provide around isolation, and being able to run with a very specific seccomp profile, and things like that, that are really cool and powerful. That's not really what we're talking about here. What I wanna talk to you about now is the notion of how the nature of containers enables you to apply software to learn what the known good state is, to automatically create rules to enforce that, to model that known good state, and then to be able to use that model to enforce, over time, compliance with that known good state. So go back to that old world security model I showed you before. If I'm building an application that runs on an Apache container or whatever, in the past I would have to go in and tell that security team: it runs httpd, and it listens on this port, and talks to this database, and so forth. What containers enable us to do is to learn that stuff dynamically, such that instead of you having to tell a security tool or a security person how your application works, software is able to learn that for you automatically. And the reason for that is that there are three fundamental characteristics of containers that really come into play. The first one is the fact that containers are really minimal. Think about the scenario where you have an application that you've built and designed to run as microservices, and to ship as containers and images. That application is much smaller. The entities that you're dealing with are much smaller and much more focused than that same application would be if you've got a big VMDK file.
Again, getting back to some simple hello world type thing that I built on Apache: if I gave you a VMDK of that, you've got the full OS and all the other stuff that goes along with it, and it's huge, and it wasn't really designed to be very visible in the first place. Even if you've done a really good job of optimizing it, you're talking at least several gigs. You do something like that in a container image and it might just be a few megs, a couple of tens of megs at most. And so you're dealing with a much smaller amount of stuff, which enables you to do this at scale. Secondly, containers, or images, I guess you could say, are declarative by default, and the "by default" part is a really important qualifier, because there have historically been a number of different tools and approaches you could use as a developer to create some kind of security manifest for your application. This has existed for many years in many different shapes and forms. When I was at Microsoft, we had stuff like this, and what always happens is, when you create this opportunity for a developer to give you a manifest, what does every developer do? They say, I need access to star. And so then suddenly the manifest is basically useless to you. Declarative for containers, though, means that we can look at a container and we know the container originates from some image, and we know that image is built from a Dockerfile, and we can inspect that Dockerfile and learn how it's assembled, what's being built in it, what are the individual steps that go into assembling the final end result that you're actually gonna run. But most importantly, that declarative capability is inherent to the technology. I as a developer don't have to do any additional work to make that happen.
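To make that declarative idea concrete, here's a minimal sketch of what reading facts out of a Dockerfile could look like. The parsing is deliberately naive (it ignores line continuations, ARGs, and multi-stage builds), and the function name is mine for illustration, not a Twistlock API:

```python
import re

def inspect_dockerfile(text):
    """Pull a few declarative facts out of Dockerfile text: the base
    image, any exposed ports, and the command the container will run."""
    facts = {"base": None, "ports": [], "cmd": None}
    for line in text.splitlines():
        line = line.strip()
        upper = line.upper()
        if upper.startswith("FROM "):
            facts["base"] = line.split(None, 1)[1]
        elif upper.startswith("EXPOSE "):
            facts["ports"] += [int(p) for p in re.findall(r"\d+", line)]
        elif upper.startswith(("CMD ", "ENTRYPOINT ")):
            facts["cmd"] = line.split(None, 1)[1]
    return facts

dockerfile = """\
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
EXPOSE 80
CMD ["httpd-foreground"]
"""
print(inspect_dockerfile(dockerfile))
```

The point is that this information is just sitting there in the build artifact; nobody had to write a manifest for it.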
I simply build my application, build my image as I normally would, and I'm automatically gonna have that declarative kind of manifest that goes along with it, which we can then read from a software standpoint to understand things. The third one is predictability, or you could even call it immutability, although most people don't truly run containers in an immutable fashion, but you could kind of use that as a synonym for it, at least. What we mean by predictable is that the container, unlike a virtual machine, should at least do the same thing from start to finish. If you have a virtual machine, you're gonna deploy that virtual machine one day, and over the course of time, people are gonna log into it to debug things, or they're gonna log into it and apt-get update and whatever else people do, and the thing that it was doing on day one is probably not gonna be exactly the same thing that it's doing on day 20, or day 200, or whatever it may be down the line. Containers are a lot different, because the whole notion of how you update and revision and manage that fleet of applications is fundamentally different. With a container, you're basically gonna say: when that application needs to be updated or changed, I'm gonna destroy whatever's out there, I'm gonna have a new image, I'm gonna provision that image, and that's gonna be the new thing. The thing that's being deployed, that entity, is the same from development all the way through production; when it needs to be updated, it's just destroyed and replaced rather than updated in the field. And that's a really critical differentiator from a security standpoint, because it means we can have a much higher degree of reliance that whatever was deployed initially is the same as what it should be 10 days later, or 100 days later, whatever time interval you're looking at.
You're not looking at something that's going to change over time; you're looking at something that's going to be, or at least should be, fairly static over time. Now, I caveat that by saying I'm fully aware, and know this from lots of firsthand pain dealing with customers, that not everybody runs every container like that today. Over time, though, if you're using this platform the right way and you're using the tooling to the best of its abilities, more and more of your applications are going to look like that. And so over time you're gonna have less work to do to keep the security model in sync with the actual footprint of the application. So those three characteristics enable that different way of doing security. And what that different way of doing security is, is basically being able to apply some levels of both static and behavioral analysis, both completely software-driven, with machine learning providing that analysis, to create what we call a model for the application. So again, take that scenario where you've got an application that's going to run on Apache, some hello world app. Instead of me having to tell the security team how it works, or somebody from the security team having to go and sit down and put the application into some kind of training mode and observe it and commit rules and so forth, every time you deploy your application we can create a model for that application automatically: through a combination of inspecting the Dockerfile and the images that were used to compose the application, because of that declarative characteristic, and through behavioral analysis, because of that predictability, because we know what it should be doing over time. And then we can create a model that describes what the application does across multiple dimensions.
Process activity, network activity, file system, system calls: a model that describes the application in a very comprehensive way and can be created without any kind of human interaction. And then, as a security company, once we've got this predictive model, we can also add in the threat intelligence that you're already familiar with. So it's not just a known good allow-list model that says here's everything that's normal to run; we can supplement that and provide additional layers of defense in depth by being able to say: is this entity now trying to talk to some Tor entry node on the internet? Is it trying to look up a DNS namespace that's associated with a botnet herder? Is it downloading malware? So this doesn't throw away all the things you've historically done from a security standpoint; it gives you an additional layer of protective capabilities by using that to supplement what the model predicts. And at the end of that, that's the automated defense. That's the ability to simply deploy your application, have the model automatically created, and then be able to protect the application at scale. So what does a model look like? I promised you guys this was not gonna be a Twistlock product pitch, and it's not, but this is an example of how we've implemented it, and certainly, theoretically, anybody could build something similar to this. On the left-hand side you can see what the UI looks like. You would go in there and you could see: here's a list of all the binaries that are supposed to run. Here are the checksums associated with those binaries. From a networking standpoint, here are the sockets it listens on. Here are the particular checksums of every binary that should be bound to every socket within that container. That alone is a really powerful thing.
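As a sketch of the idea, here is what such a model might look like as a plain data structure, with an anomaly check driven off it. The field names and values are illustrative inventions, not Twistlock's actual schema:

```python
# A hypothetical per-image model: known-good behavior across the
# dimensions discussed above, keyed to the image's immutable identity.
model = {
    "imageID": "sha256:0000",  # placeholder digest for the example
    "processes": [
        {"path": "/usr/local/apache2/bin/httpd", "md5": "<md5 of httpd>"},
    ],
    "network": {
        "listening": [{"port": 80, "process": "httpd"}],
        "outbound": [{"host": "orders-db", "port": 3306}],
    },
    "filesystem": {"writablePaths": ["/usr/local/apache2/logs"]},
}

def is_anomalous(event, model):
    """Anything the model doesn't describe is anomalous by definition."""
    if event["type"] == "process":
        return event["path"] not in {p["path"] for p in model["processes"]}
    if event["type"] == "listen":
        return event["port"] not in {s["port"] for s in model["network"]["listening"]}
    return True  # unmodeled event types default to anomalous

print(is_anomalous({"type": "process", "path": "/bin/nc"}, model))  # True
print(is_anomalous({"type": "listen", "port": 80}, model))          # False
```

Notice there is no list of "bad things" anywhere in it; the model only describes what is good.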
Historically, doing some kind of mapping like that was really hard: to be able to say, hey, this virtual machine should only listen on port 80, but can you guarantee that it's the explicit binary that should be listening that's bound to that socket on 80? That it's not some malware, or that somebody didn't take it over and have a netcat listener that just happens to be squatting on the same socket? Then the file system activity: where does it read from? What file system paths does it write to? System calls: what we do with system calls is enhance what dockerd itself is doing. Docker has this default seccomp profile that it attaches to everything. Because we know what applications are inside of your images, we can dynamically source for you, through what we call our Intelligence Stream, which is the way we deliver vulnerability and threat data to you, custom seccomp profiles for the individual applications that you use. Based on a library of research that we do centrally, we can say: here's a seccomp profile that's even more restrictive for Apache, or more restrictive for NGINX, or Mongo, or whatever. And because we know what's actually running inside of that container, we can automatically pair it up with the right profile and dynamically associate that every time that application starts up, across any node in your cluster, without you having to do anything. The right-hand side is where you can actually see what this looks like as JSON. Everything that we do in Twistlock, and I think anything in any sort of modern tool, you wanna make available through an API, something that's a REST-based API. And this makes it really easy, and actually, for the purposes of this talk, this was really the only way I could show this without some heinous thing of taking five different screenshots of the same UI. So it's kind of a cheat for me, but basically you can see the way the model is described here.
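For a feel of what a tighter per-application seccomp profile looks like, here is a toy example that emits one in the JSON format Docker accepts: deny every syscall by default, then allow only a short list. The syscall list here is illustrative, not a vetted profile for httpd:

```python
import json

# Illustrative: stricter than Docker's default profile. Deny everything
# (SCMP_ACT_ERRNO), then allow only the syscalls this app is known to use.
profile = {
    "defaultAction": "SCMP_ACT_ERRNO",
    "architectures": ["SCMP_ARCH_X86_64"],
    "syscalls": [
        {
            "names": ["read", "write", "openat", "close", "socket",
                      "bind", "listen", "accept4", "epoll_wait", "exit_group"],
            "action": "SCMP_ACT_ALLOW",
        }
    ],
}

with open("httpd-seccomp.json", "w") as f:
    json.dump(profile, f, indent=2)

# The profile would then be attached at container start, e.g.:
#   docker run --security-opt seccomp=httpd-seccomp.json httpd:2.4
```

The automation described in the talk is essentially generating and attaching files like this per application, so you never write one by hand.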
The components there are showing you what we've learned through behavioral analysis: the processes that would run, the MD5s of those processes, the network ports that are listening and what applications are bound to those, whatever outbound connections that particular container might make to other containers in the environment. All that stuff is described in the model. One other thing I want you to notice, on both sides here, is that the model is correlated back to the image ID, the immutable identity of that image. Which means that if you have a cluster with 500 nodes or 1,000 nodes or whatever running out there, we don't have to create that model on every single one of those nodes. The first time we see that image launch anywhere inside the cluster, we can create the model for it. We can share that model through our software to all the other nodes out there that we're protecting, and thus, as soon as one of the nodes has learned that image, every other node can inherit that same protection and use that same model. And so this capability gives you a degree of scale and, again, automation that you historically haven't been able to get. If you were building that application and shipping that same basic hello world app as a virtual machine, it's much more difficult to say: I want this to be protected everywhere automatically, and the next time I update this thing, tomorrow or an hour from now for that matter, I want the model to reflect that. I want a slightly tweaked security model that says, in my new version, it's a different MD5 of Apache, because I've got a patched version of httpd that I'm running. With this, because that's calculated every time you launch an image, you might have some scenario where you've got multiple concurrent versions of the app running at the same time, or you're doing A/B testing or whatever it is.
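The learn-once, inherit-everywhere behavior falls out naturally when models are keyed by the immutable image ID. A minimal sketch, with the sharing mechanism (a central console, in Twistlock's case) abstracted down to a plain dict:

```python
# Models are keyed by the immutable image ID, so the first node to see
# an image pays the learning cost and every other node reuses the result.
model_store = {}
learn_calls = []

def learn_model(image_id):
    """Stand-in for the static + behavioral learning of one image."""
    learn_calls.append(image_id)
    return {"imageID": image_id, "processes": ["httpd"]}

def model_for(image_id):
    if image_id not in model_store:
        model_store[image_id] = learn_model(image_id)
    return model_store[image_id]

# Two different nodes launch the same image:
model_for("sha256:aaaa")
model_for("sha256:aaaa")   # second node inherits, no re-learning
print(len(learn_calls))    # → 1
```

A new image version gets a new ID and therefore its own model, which is why concurrent versions and A/B tests don't collide.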
You don't have to worry about collisions between rules. You don't have to try to genericize your rules for the least common denominator. You just say: allow us to learn about each part of the application, each version of the application, and correlate the model to the specific version that's in question at any given point in time. And again, that's something that historically has been really hard for a security team to do. The follow-through from that is: instead of this very reactive model, where you're constantly chasing around your environment trying to figure out what scenarios you want to block, what things are known to be bad in the environment, what things you want to prevent, the blacklist approach to security, here's a list of IP addresses that are bad, and ports that are bad, and files that are bad, and so forth, we can transition to a model where we say: we know the known good state, and thus anything outside of that known good model is just inherently anomalous. We don't have to worry about whether we have a signature that says Netcat is an evil binary to run. We know that there's no scenario in which Netcat should be running inside of my web application, right? If that happens, that's a bad thing. I don't have to program a security tool with a list of what's good and bad ahead of time, because I'm changing the posture from something that is allow-by-default, but alerts based on some kind of blacklisting, to a posture where we say: only allow the things that are known good, that are described in this model, to actually run, and anything outside of that model is automatically looked at as anomalous. And that's a really big shift from a security standpoint, because it allows you to have a much more application-tailored way of protecting the apps that you're running.
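The posture flip is easy to see side by side. A toy comparison (paths and lists invented for illustration):

```python
# Blacklist posture: allow by default, block what a signature says is bad.
# The list is necessarily incomplete.
KNOWN_BAD = {"/bin/nc", "/usr/bin/nmap"}

def blacklist_verdict(path):
    return "block" if path in KNOWN_BAD else "allow"

# Allowlist posture: the learned model is the only definition of "good";
# anything outside it is anomalous, no signatures required.
MODEL_ALLOWED = {"/usr/local/apache2/bin/httpd"}

def allowlist_verdict(path):
    return "allow" if path in MODEL_ALLOWED else "anomalous"

# A dropped binary nobody has written a signature for slips past the
# blacklist but not past the model:
print(blacklist_verdict("/tmp/.x"))   # → allow
print(allowlist_verdict("/tmp/.x"))   # → anomalous
```

The blacklist has to anticipate every attack tool; the allowlist only has to describe one application.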
Instead of the protection being genericized and, again, targeted toward the least common denominator in your environment, you simply say: I want to only allow the things that are known good to run within this container, within this cluster, in this environment, and to be able to do so in a way that doesn't require you to set up those rules manually each time. At the end of all that, because we have this data and because we know what the actual known good activities are, we can be a lot more precise and a lot more rapid in our ability to identify threats and anomalies and take action on them, including relatively sophisticated compound threats, where you're not just looking for, hey, a process ran that wasn't supposed to run, but at the entire series of events that occurs after that. So what you're looking at is a feature we have in Twistlock, but again, it's something that could generically be built by anyone. If I have an application and I see a process run in there that's unexpected, we know that the application should not be running Netcat, but let's monitor and see what else is gonna occur. Let's see the sequence of activities. I can of course just stop it outright: by rule in Twistlock, I can say that in general I wanna always prevent any binary that's not supposed to run from ever being invoked in the first place, absolutely just prevent it from starting. But as with most security tools, a lot of people start off with more of a monitoring approach, and that's where we have this notion of an Incident Explorer, where, when that binary runs, we don't just hammer you with alerts and say, hey, Netcat ran and it wasn't supposed to run. We correlate that data together with the whole sequence of events that occurs afterwards.
So not only do we say, hey, Netcat ran here, but we say: it ran, it connected to this IP address externally that, based on that malware feed or that IP reputation list, we know is suspicious; it downloaded this file that ended up being a binary; it ran this new binary that shouldn't be there; it turned out that binary was actually nmap; nmap port scanned the rest of the network; and then it stored the results in this file that looked like it was supposed to be something else within the user profile. That sequence of events has historically been really hard to pull together, because you were trying to look at lots of different sensors, right? Historically, some of that data would be in an IDS, some of it might be in an anti-malware system, some of it might only exist in an egress proxy. All that data now can be local to the application, and because it's local to the application, we can make decisions on it more rapidly, and we can actually make sure those decisions are more accurate than they've been in the past, because we know much more deterministically what the application itself should be doing. So that incident approach is kind of the second step. You can imagine that happens within a given container, so to speak, but we also have the ability to correlate those models across your environment. And what I mean by that is: think about the kinds of behaviors that you can model not just within a given container, but across all the containers that you might have in your environment, such that we can learn the directions and ports and types of traffic that are being sent between each individual microservice that you have. So if you build and deploy applications beyond that hello world sort of example, you're gonna have an application that has multiple different components, right? It's got a persistence tier, it's got a web front end, it's got caching, and those are all decomposed into their own microservices.
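The correlation step itself can be sketched very simply: fold every event within some window of the triggering anomaly into a single incident, instead of emitting them as unrelated alerts. The event stream below is hard-coded to mirror the Netcat story above; real events would come from the runtime sensor:

```python
# Illustrative event stream for the scenario described in the talk.
events = [
    {"t": 0,  "what": "unexpected process /bin/nc started"},
    {"t": 4,  "what": "outbound connection to IP with bad reputation"},
    {"t": 7,  "what": "wrote new binary /tmp/.x"},
    {"t": 11, "what": "/tmp/.x ran and port scanned the network"},
]

def correlate(events, window=60):
    """Fold everything within `window` seconds of the triggering anomaly
    into one incident instead of emitting four unrelated alerts."""
    trigger = events[0]
    sequence = [e for e in events[1:] if e["t"] - trigger["t"] <= window]
    return {"trigger": trigger, "sequence": sequence}

incident = correlate(events)
print(incident["trigger"]["what"])
print(len(incident["sequence"]))  # → 3
```

One incident with a narrative attached is far more actionable than four disconnected alerts from four different sensors.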
What you're looking at here, it's kind of blurry on this display, is actually the Kubernetes Sock Shop application, which you guys are probably all familiar with. I don't know why they chose socks; I find it very bizarre that they didn't choose the pet shop thing that everybody else has done. But in any case, the Sock Shop application: the thing that's really cool about this is that the Visio, I call it, that view that you see there, is something that's completely dynamically created by software. Nobody had to go into Twistlock and say, hey, the orders front end is the only thing that should be able to talk to the orders database, and it should only be able to talk over the port that MySQL uses, and this other thing that receives payment information should only be able to talk to Redis, and only over a particular port. The entire connectivity mesh that you see here can be dynamically learned, and those models of individual entities, the individual images and the containers that run those images and how they operate internally, can be correlated together such that we create this super-model mesh that describes how they communicate with each other and how the entire system works together. So instead of you again having to go through and say, hey, I wanna protect an individual part of my application, or say that this given container is only able to run Apache, not only are we able to do that, but we're able to understand the context of how it should interact with all the other microservices that are part of the overall app it composes. And again, this is something that's been historically really hard to do from a security standpoint. If you're operating a traditional pattern where this is all based on virtual machines, you probably don't know what the connectivity flows are between all the different components of your application.
There's not a good way to model that. And even today, if you've got tooling and firewalls doing east-west filtering within your data center, they're almost certainly not gonna be able to see into and work with what you're doing within containers. Because with containers, regardless of the orchestrator, you're basically creating some giant layer three SDN out there that spans multiple nodes, all encapsulated, all encrypted traffic. So your historical physical firewalls, or the devices you're using to segment things based on VLANs, don't see into that, right? There's no visibility there in the first place. And even if they're enlightened to that, you still don't wanna be in a situation where you're having to say, I wanna deploy this part of my microservice into this particular VLAN and this other thing into this other VLAN. That's the old way of doing it, and it creates a lot of friction for adopting these new application deployment technologies like Kubernetes and all the CI stuff that you wanna do as you're deploying and building apps in this modern way. And so this ability to correlate applications and the images that compose those applications, to create models for them, and to have them work together in this superset, this supermodel way of describing their connectivity mesh, is a really powerful thing. Because once we understand this flow, we can ensure that only the specific traffic allowed as described in this flow is actually able to traverse the network. 
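At its core, the learned connectivity mesh is just a set of allowed flows. Here is a minimal sketch of the enforcement side, assuming the learning phase has already produced the set; the service names and ports echo the Sock Shop example from the talk but are otherwise illustrative.

```python
# Minimal sketch of enforcing a learned connectivity mesh: only flows
# (source service, destination service, port) that were observed during
# the learning phase are allowed afterwards. Names/ports are illustrative.
LEARNED_MESH = {
    ("orders-frontend", "orders-db", 3306),   # MySQL port learned dynamically
    ("payment", "redis", 6379),               # Redis port learned dynamically
}

def allow_flow(src, dst, port, mesh=LEARNED_MESH):
    """Return True only if this exact flow was seen during learning."""
    return (src, dst, port) in mesh
```

A compromised front end trying to port-scan its neighbors fails this check for every flow except the one known good path it normally uses.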
So for example, in this model, if somehow I compromise one of those front end containers that's public facing, rather than now having a foothold that I can use to attack the entirety of the rest of that microservice, or maybe all microservices, depending on how you break things up with namespaces, rather than having the ability to go out there and start port scanning everything, the only thing I can send traffic to is whatever resource I can normally talk to, and I can only talk to that resource over whatever port and protocol is normally allowed there. So yeah, maybe I could still exploit it if I find some other flaw there, but the scope of what I can do is much more narrow, right? You know, in security there's this notion of a blast radius. The blast radius used to be, my entire namespace might be compromised, or perhaps even my entire cluster. And now you're saying, for this one individual slice, the ability for somebody to leave that slice and go elsewhere is only over a very specific path that is a known good path. And by the way, wherever they're getting to, we still have other layers of defense in depth there, because what's the first thing somebody is gonna do once they exploit that front end web app? They're gonna try to get a shell on it, right? And once they get that shell, what are they gonna do next? They're gonna try to download some kind of kit. They're gonna try to run something to understand the environment that they're operating in, which means they're gonna try to run Netcat. They're gonna try to run Nmap. They're gonna try to run the same kinds of tools that are out there. 
And again, unlike the traditional world, in which you hope that you have a signature for whatever crazy, separately compiled version of Netcat they have that doesn't match any malware signatures in the sources you're using, here you don't care where it's from or what signature is associated with it. You know it's bad simply because Netcat should never run. Nothing other than HTTP should run. And that's again a fundamental shift in the way you're able to secure your application, because you're not having to have some sort of predetermined list of what's not allowed. You just simply say, this is all that's good, and if it's not good, don't allow it to run. And so this sort of pattern and these changes that you see really enable this new world model of security that we talk about, in which the models begin being developed at the very beginning of the CI process. Literally as soon as you build your application, as soon as you commit that app in Jenkins, we're able to start learning the fundamentals of how it works. Not everything, right? A lot of it is only static until the app actually runs; you can't understand the network connectivity and such yet, but you can start learning things like the binaries that are run within it and so forth. The model that's created there is custom tailored for each individual app. Like I was saying earlier, if I build my app four times a day, every time I build it I get a unique model calculated for each build, reflecting the specific binaries within that build and the specific network connectivity that build requires. And as I deploy my application, I don't have to change my operational processes to accommodate the security side of it. I just simply deploy it using the same kubectl commands that I normally would. I use the same sort of YAML that I normally would. I don't have to go in there and annotate anything. I don't have to talk to a security team member. 
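The per-build whitelist idea can be sketched as follows. This is a toy illustration under stated assumptions: `build_model` is a made-up name, and the binary list is passed in directly, whereas a real implementation would extract it by walking the image layers during CI.

```python
# Sketch of the per-build model: each CI build produces its own whitelist
# of binaries learned from that specific image, so build N and build N+1
# each get a model tailored to exactly what they contain. The extraction
# step is faked here; a real system would inspect the image layers.
def build_model(build_id, binaries_in_image):
    return {"build": build_id, "allowed": set(binaries_in_image)}

def process_allowed(model, binary):
    # No signature lookup at all: anything outside the learned whitelist
    # is bad by definition, no matter where it came from.
    return binary in model["allowed"]
```

This is why a custom-compiled Netcat with no known signature still gets caught: the check is "is this on the known-good list", not "is this on a known-bad list".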
I just simply say, I want to deploy this application. The software has learned what is good, it's learned how those good things are supposed to talk to each other, it shows you that, and then it protects you automatically. And that's one of the fundamental differences that containers enable for this new world of security. So I sort of lied, there is a small pitch about Twistlock I want to make. We do a lot of stuff around protecting your containers and your cloud native apps in general. Again, booth G-whatever that was, 20-something, in the back. Oh, I almost forgot. We are happy to announce that, as of yesterday, we are actually the first Kubernetes certified security platform. We got certified through CNCF earlier this week, and we're the first security platform in our lane that's certified for Kubernetes. So we'd love to speak with you guys. We've done a lot of work in open source, just the last slide, I won't really talk through all of it. But if you're using Docker, you're actually already using Twistlock. We built the authorization framework that ships in Docker. We built all the pluggable secrets management stuff that's in Swarm. We wrote the NIST Special Publication on container security, SP 800-190. So if you work with any federal agencies or anybody that has to comply with FISMA, we're all over that stuff as well. We'd love to work with you guys and be happy to show you the product and talk more about this whole notion of cloud native cybersecurity. And that's about it. If you have any questions, I'll be happy to answer them. Thank you. ...Or pen tests and stuff. Not quite that long ago, but still. So when you say that you're basically machine learning what all the good components are and how they talk to each other, the thing that comes to my mind is, what if you already have some bad things in the environment? 
Somebody from China has already compromised your environment, and Twistlock goes in and says, oh, it's perfectly normal that somebody from China connects to this environment with admin credentials every 20 minutes. Sure, sure. Is there any defense against that, or do you kind of punt on that? Well, you kind of have to know your application. It's a good question: what if your application, or what if your environment, is already compromised? I would tell you that in our implementation, the models are not the only way that you apply runtime defense. We also have this notion of rules, in which all the stuff you see in a model you can also effectively hard code and genericize and say it applies to any image called apache*, for example. And then at runtime what we do is combine the inherent whitelist that's in the model with whatever you explicitly allow in the rule, minus whatever you explicitly block in the rule, and that's the effective policy. And the reason I bring that up is, if you wanted to say, and some of our customers do, I know for sure I never want a particular binary or a particular thing to run, whether because you don't trust the model or because you don't know what the environment is like already, you can tell us that and we will enforce it alongside the stuff we have dynamically learned about your environment. But the core of your question is, what if my environment is already compromised? And I think, honestly, if I or any security vendor were to answer that honestly, if you're already compromised, you're really limited in what you can do to effectively protect it, right? Because even if I gave you some ability to say, hey, we saw this admin account running or something like that that shouldn't be there, the data that you have is still subject to whatever compromises of the underlying systems are out there. 
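The effective-policy computation described in that answer, learned whitelist plus explicit rule allows minus explicit rule blocks, is simple set arithmetic. A minimal sketch, with the function name invented here for illustration:

```python
# Sketch of the effective runtime policy: start from the model's learned
# whitelist, union in whatever a rule explicitly allows, then subtract
# whatever the rule explicitly blocks. An explicit block always wins,
# which is how a customer can forbid a binary even if a compromised
# environment "taught" the model that it was normal.
def effective_policy(model_whitelist, rule_allow, rule_block):
    return (set(model_whitelist) | set(rule_allow)) - set(rule_block)
```

For example, if the model learned {httpd, sh} but a rule allows curl and blocks sh, the effective policy is {httpd, curl}.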
So if your environment's already taken over, really the first thing you need to do is get rid of that, versus having a tool like ours or any other go in there to protect it. Do you have, or have you thought about having, something like a pattern library, where when Twistlock does an analysis of an environment it says, oh, I know from my pattern library this is a suspicious pattern, do you really want to do this? We absolutely do. In fact, at DockerCon EU we presented with David Lawrence, who was doing the session here earlier from Docker, about some real-world research that we've done, where we saw in one example registries that were configured for anonymous access and exposed to the internet, where bad guys were compromising the images in those registries, which were then being deployed into customers' production environments and doing all kinds of stuff that wasn't expected, right? So one of the things we've added into our product, based on that research and the kinds of attacks we've seen, like a lot of crypto mining as an example, is particular heuristics in Incident Explorer that say, when we see particular patterns in there, don't assume that just because this image started off running a crypto miner, that's okay. That's probably still not okay, and we're still gonna tell you about that. But that's really where you have to have a combination of both machine learning and some human intelligence to correlate those two things together. Yeah, in the back. You, yes. Thanks. Absolutely. We actually use that. I mean, one of the things that we do in the platform is make sure that you have the ability to specify what images and what sources of provenance are trustworthy to run within the environment. Absolutely. 
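A heuristic layer like the one just described sits above the learned model: certain behaviors are flagged even if they were present from the very first run, so a pre-compromised image can't teach the model that, say, mining is normal. This sketch is purely illustrative; the indicator strings are well-known mining tool and protocol names, not Twistlock's actual heuristics.

```python
# Sketch of a pattern-library heuristic: flag crypto-mining behavior
# regardless of whether the learned model considers it "normal".
# Indicator names (xmrig, minerd, the stratum mining protocol) are
# common public examples, used here for illustration only.
MINER_INDICATORS = {"xmrig", "minerd"}

def looks_like_mining(process_name, connections):
    """Return True if the process name or any outbound connection
    matches a known crypto-mining indicator."""
    if process_name in MINER_INDICATORS:
        return True
    return any("stratum+tcp" in c for c in connections)
```

The key design point is that these checks run unconditionally; they are never whitelisted away by the learning phase.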
And you especially see that when people take a traditional app and just kind of port it into containers, and the app has some kind of cron job type thing that runs every two days or whatever. Absolutely. So that's, honestly, kind of the secret sauce type stuff that we've added. Everything I talked to you about is very logical, theoretical stuff; where the rubber really hits the road, for you to be able to operationalize an approach like this, is that you have to have the logic to see when those things happen, and the right kind of back-off patterns and enhancement patterns for the model, so that you're not constantly bombarding people by saying, hey, this thing ran this process that wasn't expected, when it's normal for it to do that every 24 hours. So we've added a lot of general intelligence to the way we create and curate the model to take those things into account. We constantly tune it, because it's always a challenge. The time-based part is not, well, it sort of is, but for the most part, the way we've been able to develop the model is about one hour of cumulative runtime. So it's not an hour of the same thing running on the same host for an hour; it's basically an hour of cumulative runtime. Now, we do have some software triggers in there that can lengthen or shorten that depending on some other heuristics, but in general, after you deploy the application, within an hour the model will be in active mode and no longer learning. That takes me to a great question: what do attackers do in a container-based environment? Our research team, Twistlock Labs, has found multiple 0-days in Alpine and memcached, so we have a really legit set of researchers that look at this. 
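The cumulative learning window works roughly like this sketch, assuming a single shared model per image. The threshold and the class shape are illustrative; the talk only specifies "about one hour of cumulative runtime" plus triggers that can lengthen or shorten it.

```python
# Sketch of the one-hour *cumulative* learning window: runtime is summed
# across all containers running the same image, so ten replicas finish
# learning in roughly six minutes of wall-clock time.
LEARNING_SECONDS = 3600  # ~1 hour cumulative, per the talk

class Model:
    def __init__(self):
        self.cumulative = 0.0
        self.active = False  # False = still learning, True = enforcing

    def observe(self, runtime_seconds):
        """Record observed runtime from any container of this image."""
        if not self.active:
            self.cumulative += runtime_seconds
            if self.cumulative >= LEARNING_SECONDS:
                self.active = True  # stop learning, start enforcing
```

Summing across replicas, rather than timing each container independently, is what keeps the learning phase short even for horizontally scaled services.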
And one of the things that's really interesting to me is that the patterns attackers use, at least today, are not fundamentally different whether they're in a container or not, because the tooling is really not that different, in all honesty. You're gonna do the same thing whether you're in a container or a VM or a bare metal host. The container doesn't help or hinder you very much when you're an attacker. The first thing you're gonna try to do is get some kind of shell that you can use to elevate privilege and then do recon on the rest of the environment. And it doesn't matter if you're in a container or not. Containers, like I said, this is one place where the technology gives some advantage to the defender rather than the attacker, because again, we have that better ability to determine that that's wrong, yeah, exactly, exactly. So we basically have three levels of response in the product. One is just alerting; it's just logging. We have one that we call... Sorry, what were you gonna say? Yeah, we have one that we call prevent, which basically says the discrete activity that's attempted that's not consistent with the model is prevented. For example, normally your app writes to /var/lib/foo, and now it tries to write to /usr/bin, right? We literally prevent that file system write. Or your app normally runs Apache and suddenly it tries to run Netcat; we literally prevent that process from spawning, but don't touch the rest of the container. The other, kind of the further extreme that we have, is what we call block, which stops the entire container. 
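The three response modes can be sketched as a simple dispatch. This is an illustrative stub, not real enforcement code: the action strings stand in for hooks into logging, syscall-level denial, and the orchestrator.

```python
# Sketch of the three response levels: "alert" just logs the violation,
# "prevent" denies the one offending action but leaves the container
# running, and "block" stops the whole container and keeps it stopped.
def respond(mode, violation, container):
    actions = []
    if mode == "alert":
        actions.append(f"log: {violation}")
    elif mode == "prevent":
        actions.append(f"deny: {violation}")          # e.g. one exec or file write
    elif mode == "block":
        actions.append(f"stop: {container}")          # forensics-ready stop
        actions.append(f"quarantine: {container}")    # stop the orchestrator from rescheduling it
    return actions
```

The quarantine step matters because, as noted below in the talk, simply stopping the container would just cause Kubernetes to reschedule it elsewhere.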
And I think of that as arguably the more secure way to do it, because you shouldn't have a scenario in which your application is trying to run binaries it shouldn't; that's indicative that maybe the app itself is compromised. You know, maybe the attacker can't get beyond the app itself to spawn that shell or whatever else, but the app is still suspicious. So when we do block, what we do is actually stop the container, leave the file system in a forensics-ready state, and prevent further instances of that container from being started, because if we just stopped it, Kubernetes would start it someplace else. So we prevent it from starting anywhere else. And that's the spectrum: alert, prevent, block are basically the modes we have. All of them. So what's your, what's your background? Yeah, the CVE part is a totally different discussion. Vulnerability analysis and prevention is a big part of our platform; it's not really what I talked about today at all. But one of the things we think is really important, and we take some data from runtime for this, is that rather than just simply giving you a list that says this image has all these vulnerabilities, because in every environment that uses containers at reasonable scale you're gonna have hundreds of images and thousands of vulnerabilities, and you cannot remediate every one of them, what we do is give a risk score to every vulnerability we find in your images and stack rank them based on that. And that risk score takes into account not just the CVSS score, but the attack vector, the attack complexity, and also specific runtime characteristics in your environment. Like, is that container connected to the network? Does it run as privileged? Does it have an SELinux profile? 
And that allows us to say, hey, maybe in your environment the CVE that's a nine in other places is actually a five, because it only affects a container that's not even connected to the network, so that's less critical for you to fix than this other vulnerability on a web-facing thing that gives somebody privileged access to the host, for example. Yes? You mentioned stopping the container with the block action. When I was at CoreOS and Matthew Garrett was still there, he did some work in the rkt runtime based around, his specific case was looking for containers that are trying to do privilege escalations, and his default action when seeing that was to stop the container. Is that work that you guys were aware of, or did that feed into some of this stuff? I wasn't aware of that work. No, I mean, no, I wasn't aware of it. But it's a logical thing; we think it's a good thing, and we did it ourselves as well. But no, I wasn't aware of that, actually. It's the obvious thing to do. Yeah, yeah, absolutely. Well, like I said, we're at booth G28, I think, or somewhere back there. Thank you guys very much for coming. I really appreciate it. And hopefully we'll see you later.