The Cube presents KubeCon and CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners.

Welcome to Valencia, Spain, and KubeCon + CloudNativeCon Europe 2022. I'm Keith Townsend, along with my host Paul Gillan, Senior Editor, Enterprise Architecture at SiliconANGLE. We are continuing the conversation here at KubeCon + CloudNativeCon around security and app defense. Paul, were you aware there were this many security challenges native to cloud native?

Well, there are security challenges with every new technology. And as we heard today from some of our earlier guests, containers and Kubernetes naturally introduce new variables in the landscape, and that creates the potential for vulnerabilities. So there's a whole industry evolving around that. Yesterday we talked very much about managing Kubernetes; today we're talking about many of the nuances of building a Kubernetes-based environment, and security is clearly one of them. So let's welcome our guest, Owen Garrett, Head of Products and Community at Deepfence.

Thank you.

You know, I'm going to start with a pretty interesting question. Security at scale is one of your taglines. What does that mean exactly?

Absolutely. So Kubernetes is all about scale. Securing applications in Kubernetes is a completely different game from securing your traditional monolithic legacy enterprise applications. Kubernetes grows, it scales, it's elastic, and the perimeter around a Kubernetes application is very, very porous. There are lots of entry points. So you can't think about securing a cloud native application the way that you might have secured a monolith. Securing a monolith is like securing a castle: you build a wall around it, you put guards on the gate, you control who comes in and out, and the job is more or less done. Securing a cloud native application is like securing a city.
People are roaming through the city without checks and balances. There are lots of services in the city that you've got to check and monitor. It's extremely porous. So all of the security problems in Kubernetes with cloud native applications are amplified by scale: the size of the application, the number of nodes, and the complexity of the application and the way that it's built and delivered.

That's kind of a chilling phrase, "the perimeter is porous." Companies adopting Kubernetes right now are evidently bringing in all of these new vulnerability points. Do they know what they're getting into?

Many don't. There's a huge amount of work around trying to help organizations make the transition from thinking about applications as single components to thinking about them as microservices with multiple little components. It's a really essential step, because that's what allows businesses to evolve, to digitize, to deliver services using APIs and mobile apps. So it's a necessary technical change, but it brings with it lots of challenges, and security is one of the biggest.

So I'm thinking about that porous nature. I can't help but think: my traditional IPS does a really great job of blocking access to that centralized data center. But as I think about that city example you gave me, I'm thinking, you know what, I have intruders, or not even intruders, but bad actors, within my city.

You do.

How does Deepfence help protect me from those bad actors that are inside, roaming the city?

So this is the wonderful, unique technology we have within Deepfence. We install little lightweight sensors on each host that's running your application: on Kubernetes nodes as a DaemonSet, on Fargate instances, on Docker hosts, on bare metal. And those sensors install little taps into the network using eBPF, and they monitor the workloads.
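The per-node sensor rollout Owen describes is what a Kubernetes DaemonSet provides: one sensor pod scheduled onto every node in the cluster. A minimal sketch of such a manifest, expressed as a Python dict; the image name, namespace, and labels here are hypothetical placeholders, not Deepfence's actual artifacts:

```python
# Sketch of a DaemonSet manifest: one sensor pod per cluster node.
# All names and the image are hypothetical placeholders.
def sensor_daemonset(image="example.org/sensor:latest"):
    return {
        "apiVersion": "apps/v1",
        "kind": "DaemonSet",
        "metadata": {"name": "security-sensor", "namespace": "security"},
        "spec": {
            "selector": {"matchLabels": {"app": "security-sensor"}},
            "template": {
                "metadata": {"labels": {"app": "security-sensor"}},
                "spec": {
                    # hostNetwork lets a network tap observe node traffic
                    "hostNetwork": True,
                    "containers": [{
                        "name": "sensor",
                        "image": image,
                        # keep the per-node footprint small
                        "resources": {
                            "limits": {"cpu": "100m", "memory": "128Mi"}
                        },
                    }],
                },
            },
        },
    }
```

Because the DaemonSet controller schedules one replica per node automatically, the sensor fleet grows and shrinks with the cluster, which matches the elastic nature of Kubernetes that Owen describes.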
So it's a little bit like having CCTV cameras throughout your city, tracking what's happening. There are a lot of solutions which will look at what happens on a workload: traditional XDR solutions that look for things like process changes or file system changes. We gather those signals, indicators of compromise, but those alone are too little, too late. They tell you that a breach has probably already happened.

What Deepfence does is we also look at the network. We gather network signals. We can see someone using a reconnaissance tool, roaming through your application, sending probe traffic to try and find weak points. We can see them then elevating the level of attack and trying to weaponize a particular exploit or vulnerability that they might have found. We can see everything that comes in to each of the components, not just at the perimeter, but right inside your application. We see what happens in those components, process and file integrity changes, and we see what comes out: attempts to exfiltrate something that looks like a database file or a password file, for example.

And we put all of these little subtle signals, the indicators of attack, the network-based signals, and the indicators of compromise, together, and we build a picture of the threats against each of the workloads in your cloud native application. There's lots and lots of background recon traffic. We see that; you generally don't need to worry about it, it's just noise. But as that elevates and you see evidence of exploits and lateral spread, we identify that. We'll let you know, or we can step in and proactively block the behavior that's causing those problems. So we can stop someone from accessing a component, or if a component's compromised, we can freeze it and restart it. And this is a key part of the technology within our ThreatStriker security observability platform.

False alerts are the bane of the security administrator's existence.
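The correlation Owen outlines, folding network-side indicators of attack together with host-side indicators of compromise into one per-workload threat picture, can be sketched roughly like this. The signal names and weights are invented for illustration and are not Deepfence's actual model:

```python
# Illustrative sketch: combine network and host signals into one
# per-workload threat picture. Signal names and weights are invented.
from collections import defaultdict

SIGNAL_WEIGHTS = {
    "recon_probe": 1,         # background noise, low risk
    "exploit_attempt": 10,    # weaponized traffic against a weak point
    "lateral_movement": 20,   # spread between components
    "process_change": 15,     # host-side indicator of compromise
    "file_exfiltration": 30,  # data leaving a component
}

def build_threat_picture(events):
    """events: iterable of (workload, signal) tuples from all sensors."""
    picture = defaultdict(int)
    for workload, signal in events:
        picture[workload] += SIGNAL_WEIGHTS.get(signal, 0)
    return dict(picture)
```

The point of the sketch is the shape of the data flow: many weak signals from many sensors accumulate against the workload they target, so isolated recon stays near zero while a real attack chain stands out.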
What do you do to protect against those?

So we use a range of heuristics and a small degree of machine learning to try and piece together what's happening. It's a complicated picture. Some of your viewers will have heard of the MITRE ATT&CK matrix, a dictionary of the tactics, techniques, and procedures (TTPs) that attackers might use in order to attack an infrastructure. So we gather the signals, those TTPs, and we then build a model to try and understand how those little signals piece together.

So maybe there's a guy with a striped vest who is trying the doors in your city, a low-level criminal who isn't getting anywhere. We'll pick that up, and that's low risk. But then if we see that person infiltrate a building because they find an open door, that raises the level of risk. So we monitor the growing level of risk against each workload, and once it hits a level of concern, we let you know. You can then forensically go back in time and look at all of the signals that surround that. We don't just tell you there was an alert and a file was compromised in your workload, do something about it. We tell you the file was compromised, and prior to that there were these events, process failures; those could have been caused by network events that are correlated to a vulnerability that we know; and those in turn could have been discovered by recon traffic. So we help you build up that entire attack picture. Every application is different. You need the context to understand and interpret the signals that a solution like ThreatStriker gives you, and we give you that context.

So I will push back. If I'm a platform team, you know what, I have a service mesh. I have trusted traffic going to trusted destinations, coming from trusted sources. I'm cutting off the problem even before it happens. Why should I use Deepfence?

So a service mesh won't cut off the problem.
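The escalating-risk model Owen sketches, where low-level recon stays below a threshold, escalation trips an alert, and the full event history remains queryable for forensics, might look roughly like this. Scores, event names, and the threshold are invented for illustration:

```python
# Illustrative sketch of per-workload risk escalation with a
# forensic timeline. Scores and the alert threshold are invented.
class WorkloadRiskMonitor:
    SCORES = {"recon": 1, "exploit": 25, "lateral_spread": 40}
    ALERT_THRESHOLD = 50

    def __init__(self):
        self.risk = 0
        self.timeline = []  # kept so you can go back in time forensically

    def observe(self, timestamp, event):
        """Record one signal; return True when risk warrants an alert."""
        self.timeline.append((timestamp, event))
        self.risk += self.SCORES.get(event, 0)
        return self.risk >= self.ALERT_THRESHOLD

    def forensic_history(self):
        """All signals leading up to the current risk level."""
        return list(self.timeline)
```

Notice that the alert fires on accumulated risk, not on any single event, which is what keeps the "guy trying the doors" from paging anyone while still surfacing a chain of recon, exploit, and lateral spread.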
It'll just hide the problem, because a service mesh will just encrypt the traffic between each of the components. It doesn't stop the bad traffic flowing. If a component is compromised, people can still talk to another component, and the service mesh happily encrypts it and hides it. What we do, we love service meshes, because we can decrypt the traffic, or we can inspect the individual application components before they talk to the mesh sidecar. So we can pull out and see the plain-text traffic. We can identify things that other tools wouldn't have a hope of identifying.

So, you know, you just triggered something.

Yeah.

A lot of companies do not like decrypting that traffic after it's been sent. They don't want anyone else, including security tools, to see it. How do you serve those clients?

So we serve those clients by having an architecture that sits entirely on premises, in their infrastructure. Their sensitive data never leaves their network, their VPCs, their boundary. They install the ThreatStriker console, the tool that does all of the analysis and makes the protection decisions, and they run that themselves. They deploy the ThreatStriker sensors in their production environment, and those talk over secure, authenticated links to the console. So everything sits within their purview, their degree of control.

So if they're building a cloud application, though, or a hybrid cloud application, how do you deal with the cloud side?

So whether their production environments are next to the ThreatStriker console or running on remote clouds, our sensors will run in all of those environments, and the console will manage a complex hybrid environment. It will show you traffic running in your Kubernetes cluster on AWS, traffic running on your VMs on Google, traffic running in your Fargate instances, again on AWS, and on your on-prem instances.
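The "secure, authenticated links" between sensor and console can be illustrated with a simple message-authentication sketch: each telemetry batch carries an HMAC tag that the console verifies before trusting it. The key handling and wire format here are invented for illustration and are not Deepfence's actual protocol; in practice mutually authenticated TLS would carry the stream as well:

```python
# Illustrative sketch: a sensor authenticates each telemetry batch to
# the console with an HMAC tag. Wire format and keying are invented.
import hashlib
import hmac
import json

def sign_batch(shared_key: bytes, batch: dict) -> dict:
    """Sensor side: serialize the batch and attach an HMAC-SHA256 tag."""
    payload = json.dumps(batch, sort_keys=True).encode()
    tag = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def verify_batch(shared_key: bytes, message: dict) -> bool:
    """Console side: recompute the tag and compare in constant time."""
    expected = hmac.new(shared_key, message["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])
```

The design point matches Owen's answer: because both ends of the link and the key live inside the customer's boundary, no third party needs to see or vouch for the telemetry.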
It gathers that data securely from each of those remote places and sends it to the console that you own and operate. So you have full control over what is captured. It's encrypted, it's authenticated, it's streamed back. It never leaves your level of control.

Talk to me about the overhead. How is this deployed and managed within my environment?

So there are two components, as we've learned. We have the console. All of the work is done on the console: any necessary decryption, all the calculation. That runs on a Kubernetes cluster that you would deploy and you would scale, so it's fully in your control. Then you need to install little sensors on each of your production environments to bring the data back to the console.

Now, are those pods, or are those running inside of containers themselves?

So they are container-based. They're typically deployed as a DaemonSet, so one instance per node in your Kubernetes cluster. We have put a lot of engineering work into making those as lightweight as possible. They do very little analysis themselves: a little bit of pre-filtering of network traffic to reduce the bandwidth, and then they pass the packets back to the management console. Our goal is to have minimal impact on customers' production environments so that they can scale and operate without an impact on the performance or availability of their applications. And we have customers who are monitoring services running on literally thousands of Kubernetes nodes, streaming the data back to their management console, and using that to analyze, from a single point of control, what's going on in their applications.

We hear time and again CIOs complain that they have too many point security products. I think the average is 87 in the enterprise, according to one survey. Aren't you just another?

And that is the big challenge with security. There is no silver-bullet product that will secure everything that you have.
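The on-sensor pre-filtering Owen mentions, doing a little cheap local work so that far less traffic has to be streamed to the console, might be sketched like this. The filtering rules are invented for illustration; a real sensor would filter at the eBPF layer, not in Python:

```python
# Illustrative sketch: lightweight on-sensor pre-filtering that drops
# uninteresting packets before streaming to the console, trading a
# little local CPU for much less bandwidth. Rules are invented.
INTERESTING_PORTS = frozenset({22, 80, 443, 5432})

def prefilter(packets):
    """Keep packets worth the console's attention; drop the rest."""
    kept = []
    for pkt in packets:
        # forward traffic on ports we care about, or anything the
        # local tap has flagged; drop ordinary background chatter
        if pkt["dst_port"] in INTERESTING_PORTS or pkt.get("suspicious"):
            kept.append(pkt)
    return kept
```

This split (dumb, cheap filtering at the edge; expensive correlation at the console) is what lets a deployment stream from thousands of nodes without the sensors themselves becoming a performance tax.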
What you're securing scales over space, from your infrastructure to the containers, the workloads, and the application code. It scales over time: are you putting security measures in at shift-left, in development, when you deploy, or are you securing production? And it scales over environments. There is no silver bullet that will provide best-of-breed security across that entire set of dimensions. There are large organizations that will present you with holistic solutions, which are really a bunch of different solutions with the same logo on them, bundled together under the same umbrella. Those don't necessarily solve the problem. You need to understand the risks that your organization faces, and then what the best-of-breed solutions are for each of those risks and for the life cycle of your application.

At Deepfence, we are about securing your production environment. Your developers have built applications. They've secured those applications using tools like Snyk, and they've ticked the box and signed off, with a list of documented vulnerabilities, saying: my application is secure, it's now ready to go into production. But when I talk to application security people and ops people and I ask, are the applications in your Kubernetes environment secure, they say, honestly, I don't know. The developers have signed off on something, but that's not what I'm running. I've had to inject things into the application, so it's different. There could have been issues that were discovered after the developers signed off. The developers made exceptions. But also, 60 to 80% of the code I'm running in production didn't come from my development team: it's infrastructure, it's third-party modules. So when you look at security as a whole, you realize there are so many axes that you have to consider.
There are so many points along these axes, and you need to figure out, in a kind of Venn-diagram fashion, how you are going to address security issues at each of those points. So when it comes to production security: if you want a best-of-breed solution for finding vulnerabilities in your production environment, ThreatMapper, our open source tool, will do that; and for monitoring attack behavior, ThreatStriker Enterprise will do that. So Deepfence is a great set of solutions to look at.

Owen, thanks for stopping by. Security in layers is a recurring theme we hear security experts talk about; no one solution will solve every problem when it comes to security. From Valencia, Spain, I'm Keith Townsend, along with Paul Gillan, and you're watching The Cube, the leader in high-tech coverage.