have a go. All right, ladies and gentlemen, welcome to the Wednesday episode of the OpenShift Commons Briefings Operator Hours. My name is Michael Waite from Red Hat. Been at Red Hat forever. And today, we are fortunate enough to have with us some really great friends from NeuVector. And if the screen share is working, you can see that we have Tracy Walker, who's a Senior Solutions Engineer. And I'm told that he's quite shy, so we're going to see what we can do to get him to give us more than one-word answers today. We have Glen Kosaka, longtime partner of ours, VP of Product Management at NeuVector, and then our very own Cameron Skidmore, who's an SA from Red Hat. How's everybody doing today? Doing great, Mike and Cameron. Thanks for having us. Excellent. Cameron, what's that? Tracy, did you say a one-word answer? Yes, I said hi. Cameron, we were talking while we were getting ready to get started here. I was complaining about the 80-degree weather up here in northern New Hampshire, and you were like, oh, that must be nice. Where are you joining us from today, Cameron? Sunny North Carolina right now. Last week, it was about 100, with 80% humidity. So 80 degrees sounds so nice right now. Very jealous. Yeah. Is that normal there this time of year? It is. It is. And it's just something you kind of accept. I remember growing up playing soccer, and it was just always going to be about 100 out in the summer. You kind of get used to it, but you never enjoy it. I guess that's how I would describe it. Not to make this be about weather, but when I was in the Army a long time ago, I went to basic training down at Fort Benning, Georgia, and it was in July and August. That was just amazingly hot and wet. Anyways, what do you do at Red Hat, Cameron? Tell us a little bit about yourself while we have you front and center. Yeah, sure thing. I'm a partner solutions architect.
So, I work with our global ISV partners, case in point, NeuVector, the friends we have on our call today, and we do joint solution creation together. We work on integrating our technologies and spreading the good word about the partner ecosystem at Red Hat. Okay. Glen, how about yourself? Why don't we start with you? Let's take beauty before age. Well, I run product management and a number of other technical functions at NeuVector. As you mentioned, we've known each other for quite a while. I was with the company when we launched five years ago. At that point, we focused mainly on runtime security, but currently we've expanded that product line to really cover the whole life cycle. I have a telephone ringing, sorry about that. I'm not going to get up and run across the room. We're just going to have to deal with it; apparently, I'm guessing, my warranty has expired on my vehicle and I need to buy GameStop gift cards or something. Glen, so how long have you been at NeuVector? You said like five years? Yeah, too long to remember, but back in the old days, Docker and DockerCon were the big hype, and now that's pretty much gone and it's all Kubernetes and OpenShift these days. Yeah. I'm actually going to ask Tracy about that, about the whole Docker thing, but speaking of Tracy, why don't we find out about you? I see someone lurking over your shoulder there, Tracy. What do you do at NeuVector? Yeah, I'm a senior solutions engineer. I work with our customers and prospective customers to help them understand how they can use NeuVector to secure their OpenShift and Kubernetes workloads. Now, is that a staged background? Is that one of those fancy screen overlays? No, this is actually pre the boom of COVID and online calls and virtual backgrounds and all that. That's just the back of my office. That's the only place my wife will allow me to have my Darth Vader. She doesn't like it.
So that's actually Han Solo when he got frozen at the end of, what was it, episode five? That's right. Wow. Yes. Yes, you got it. Yes. The Han Solo. Yeah. That's real and 3D. Is it? It looks much better on the Zoom calls than it does in person, because it's just a Fathead poster, but it fits that door. It's kind of a plain door. So yeah, it seemed appropriate with everything else I got going on. What's that? I think my favorite line from any movie is right after he gets thawed out of the carbonite and he can't see, and he asks Luke, how are we doing? And Luke goes, same as always. And he says, that bad, huh? Exactly. All right. So NeuVector, N-E-U-V-E-C-T-O-R, is that two words, one word? What kind of security company are you? Is it like endpoint security? Container and Kubernetes and OpenShift security. And the name actually comes from new threat vectors, right? In a containerized environment, you have new attack surfaces. It could be the orchestrator, it could be the runtime, or it could just be the container workloads that are running. That's really what we focus on. Gotcha. And how long has the company been around? About six years. So, okay, we're your typical Silicon Valley company. A couple of co-founders had a great idea, coming from VMware and Trend Micro and Fortinet, to combine network security with virtualization and endpoint security. And a Kubernetes environment is fortunate for security firms and customers, because all of that is virtualized, right? The network is virtualized, the application layer is virtualized, so you can really see the network, you can see what's going on in containers, you can see the host. So it's really been an exciting time to see Kubernetes grow and to see companies go into full-scale production with their new pipelines. Yeah. Yeah.
So you folks are here today, and this isn't by chance, right? You folks have a Red Hat certified container. You've actually been working with us for years on that, and you have an operator that's Red Hat certified. So we put on these briefings and invite our partners like yourselves to come join us. And today, we're going to be talking about network packet inspection as one of the topics, but mostly deep packet inspection. What is deep packet inspection? Well, we'll let Tracy get into that. That's a topic very near and dear to his heart. So I'll just start it off with some humor and say this isn't light or shallow packet inspection. This is deep. Tracy, what do you got to say about that? Deep packet inspection means that we are looking at the actual network traffic as our source of truth, first and foremost. We're able to see the network payloads. We're able to identify the layer seven protocol. We're able to validate that layer seven protocol, which means we have protocol decoders that are able to make sure there are no tunneling attacks or anything in the network. We're not using Berkeley Packet Filter, we're not using kernel shims, we're not using iptables changes in Kubernetes or OpenShift; we're actually just looking at the network traffic. And using the network as that source of truth means we're 100% accurate. It also means that we can identify and block things at the network layer. So deep packet inspection really means we've taken a concept that has been used in firewalls and brought it inside of your OpenShift clusters. Now you said OpenShift clusters. What about companies that are not yet moving to multi-cloud, that are still running workloads in their data centers? Do you folks have a fit there as well?
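Tracy's three steps, seeing payloads, identifying the layer seven protocol, and validating it, can be illustrated with a toy sketch. This is not NeuVector's implementation; the byte signatures and function names below are simplified assumptions, but they show why protocol validation catches tunneling over an otherwise-allowed port.

```python
# Toy illustration of layer-7 protocol identification and validation,
# the core idea behind deep packet inspection (DPI). This is NOT
# NeuVector's implementation; the byte signatures are simplified.

HTTP_METHODS = (b"GET ", b"POST ", b"PUT ", b"DELETE ", b"HEAD ", b"OPTIONS ")

def identify_protocol(payload: bytes) -> str:
    """Guess the application protocol from the first bytes of a payload."""
    if payload.startswith(HTTP_METHODS):
        return "HTTP"
    if payload[:1] in (b"*", b"+", b"$"):   # RESP framing used by Redis
        return "Redis"
    if payload[:2] == b"\x16\x03":          # TLS handshake record header
        return "TLS"
    return "unknown"

def validate_connection(payload: bytes, allowed_protocol: str) -> bool:
    """A connection on an allowed port must still speak the allowed
    protocol; this check is what catches tunneling over an open port."""
    return identify_protocol(payload) == allowed_protocol
```

For example, a connection that is allowed as HTTP but carries a TLS handshake would fail `validate_connection` and be flagged, even though the port itself is open.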
Do the applications have to be containerized and running in a distributed hybrid model in order for the threat to be valid enough that they would need NeuVector's technology? I guess that's my long-winded question. I'll take a first swing and then let Glen correct me. We work with containerized applications, so it doesn't have to be using an orchestrator, but they do need to be containerized and it needs to be a containerized environment, so Docker or OpenShift, et cetera. And any flavor of Kubernetes; we work with Rancher, OpenShift, and various other flavors of Kubernetes. And it is because of those environments: sometimes those containers or those clusters are exposed directly to the internet, they get that traffic directly, and that's why it's important to be able to secure those workloads in their environment. Established perimeter security, firewalls and things like that, is usually adequate for traditional workloads, but if you're running a containerized workload, whether it's on-prem in an air-gapped environment, which we have many customers that do, or in a cloud environment, it doesn't matter; we work inside of those environments, inside of Kubernetes as well as those containerized environments, and that's where we do the security. Okay, so what's the net impact of deep network packet inspection? What should people or customers specifically be worried about when running containers in production? Sure, I'll continue. Well, they should be worried about the same things that they've always been worried about. They should be worried about someone gaining access via the network. Every exploit in the world is a combination of network access and processes, right?
They're going to get access via the network, they're going to execute processes somehow, they're trying to get information back, they're trying to wreak havoc, they're trying to explore to find other ways of getting reverse shells, et cetera. So the threats are kind of the same, the risks are essentially the same, but because you have these newer technology environments, where it's containers and lots of containers and you're orchestrating applications with microservices and all of that, that has exposed new threat vectors, as Glen mentioned, and new ways of getting into those environments and attacking them that can maybe bypass existing perimeter security, because it can't see inside of Kubernetes and OpenShift traffic. And that's where we bring that capability back, right? You lose some of that visibility when you're using an orchestrator, because that's what it does: it abstracts away the complexities of managing the network between hundreds, if not thousands, of containers. You gain the simplicity and the orchestration and the health checks and all that, but you lose visibility of what's really happening. And can we detect something? If you've got an application exposed and somebody's trying to do cross-site scripting or trying to find a way to exploit your environment, that's what we give back. And because we're using the network as a source of truth, we can use a lot of automation, so that increasing security doesn't also mean creating a bunch of stuff people have to do. We can automate so much of it that you can do virtually everything without lots of configuration. Got you. So this may not be a surprise to you, but we are live, going out over Twitch and on YouTube. And so if anyone out there on the World Wide Web wants to play stump Tracy today, he's up for it.
Apparently, there's no question that he can't answer, or at least come up with a reasonable facsimile of an answer for. So if you have any questions for Tracy or Glen or Cameron, well, you don't need to ask me any questions, drop them in the chat and our bot will bring them over here into the bridge. You brought up the word microservices. Is microservices still buzzword bingo? And let me clarify what I mean. Everyone has been talking about how containers are getting smaller and smaller, and they're getting into microservices, and that's why you need tools like a service mesh to be able to manage all these tiny little containers all over the place. But I was on a call, actually we were doing one of our TV shows about a month and a half ago, and somebody popped in and said, that's not what we're seeing inside our environment. We're actually seeing our containers getting bigger and bigger. Matter of fact, they have some that are, I think he said, like a terabyte in size. Is that an anomaly, Glen? Is microservices real, or why are people sharing with me that their containers are actually getting bigger instead of smaller? Well, certainly if they're just trying to lift and shift existing applications, you can take a big old monolithic app and put it in a container and run it. So I think companies are just rushing to get going on this new cloud infrastructure by doing that. But I think what everybody is striving for, whether they call it microservices or not, is really separating out the parts of their applications so that different development teams can have their own unique pipelines. So if I want to update just the customer address part of my application, I can do that, and I can add new features to it without having to release and test the entire big application.
One of the things that I think is most critical to understand is that we're moving to a model where there are automated pipelines and security is being built into those pipelines. In order to really enable security teams to fit into that, you've got to have kind of a microservices-based approach, right? When your application team develops its little app widget, they can really understand what's going to happen in there, and they can declare the allowed behavior of their app once it gets into production, and then it can be locked down. If you have a big, huge monolithic app, it's much more difficult to do that and to take advantage of that modern security pipeline. So that's kind of the goal, I think, of many companies. But what you may be seeing is that once you get your first applications done, there is a tendency toward application creep: as you add more and more features, they get bigger and bigger. That's probably just a natural human tendency that's going to occur unless you have a very disciplined architecture team. Okay. All right, I'm actually just writing down a note. I want to ask Tracy a question, but I want to wait till after the demo. Being here at Red Hat, we have Cameron and Dave Muir and Aaron Levy and all these people that manage our relationships with security companies. It seems like there's just a ton of security companies. How do I phrase this? How many are needed? Are they all out there saying, I'm a security company for multi-cloud? Do you really need to have 40 different security companies in order to be able to protect your production environment? I don't know if that's for Tracy or Glen or Cameron, but what are your thoughts? You know, I've been through a lot of evolutions in security.
So whether it's endpoint security or firewalls, we had a dozen firewall companies, EDR companies, and you need that healthy competition to really spur innovation and growth, and customers need choices. But having said that, I will say that because Kubernetes and containers are abstracting things, for the first time you can really, in a single security tool, see the network. You can see what previously was called endpoint security, so you can see what's happening either on the host or in the container, the process and file activity. You can do vulnerability management, because it's all part of the same pipeline and the same infrastructure. Previously, you had to have three or four or five different security vendors: you needed a firewall vendor, you needed endpoint security, you needed malware protection, you needed vulnerability management, and you needed separate companies, teams, and processes for that. So the good news is, as things become more virtualized up the stack, you can actually start to look at combining a lot of those previously separate security functions. Okay. Well, I think you folks wanted an opportunity to show us how it works. Is that right, Tracy? Did you prepare a technology demonstration for us here today? I did. I do have a technology demonstration. Yeah, while you're pulling that up, Tracy, I mean, we're really happy about today's topic, and this month's focus is really on the network and network controls. And the reason we're excited about that is, you bring up that there's a lot of security companies, do we need so many? Well, one of the things that we do uniquely and especially well at NeuVector is the whole network visibility and packet inspection. So Tracy is going to show us that in action: how you can actually detect attacks that happen, how we can see the packets, and things like that. Exactly. Yeah, I will use a...
Is this going to be live or just a pre-canned demo? Yeah, I'm bringing up a YouTube video and I'm just going to show the video. No, I'm sorry. We'll get some coffee. And I apologize, I can't share my screen. I just installed this BlueJeans. Give me one second. I have to give some permissions. I have to go into system preferences, right? I do. I may have to drop and rejoin. Oh, really? Yeah, insane. I'm going to have to quit and reopen. So if I drop, I will be right back. Maybe we'll answer another question. Yeah, Mike, do you have any questions for me, or Cameron, while Tracy is rejoining? Well, I was going to ask this of Tracy after the demo, but remember a couple months ago there was the pipeline issue, and there was no gas in the entire southeast of the States and the gas stations were all empty? I forget what company was hacked, but when you see something like that, where a company gets taken offline and they can't deliver their services, whether it's raising pigs or delivering energy and oil, is that because they're just not paying attention? I'm not trying to point fingers at any company, but when something like that happens, is it because they were just not doing their job right from a security perspective, or are there just that many possible vulnerabilities out there that it's almost impossible to secure them all? Yeah, I mean, it's hard to say in any specific case if there's something that could have been done. Clearly, in the Equifax breach, that was an Apache Struts attack that should have been remediated, but was not, and it was exploited. The Tesla crypto mining one was just a Kubernetes console that was left open.
So those were kind of, you know, preventable moments that could have been solved by some auditing and configuration management. But having said that, these attackers, these hackers, are very sophisticated, and they're always trying to stay one step ahead of the security teams and the technology. So it's a continual challenge, and companies just need to stay vigilant. There are always new attack surfaces; there's a new Kubernetes man-in-the-middle attack that can attack the API server and orchestrator, and they need to understand that could happen and make sure things like that are mitigated. And then, as Tracy is going to show, ultimately you know that something bad could happen despite all of the preventive controls that you put in place and the scanning that you do. So ultimately, you have to have some type of system that can detect a zero-day attack or an advanced persistent threat that's never been seen before and is just going to do some weird things. You need to be able to detect and try to block those things. Okay, we had a question come in, I think from YouTube. I'm going to just toss this up there, Tracy, if that's okay. This one came in from John. John says, earlier I heard NeuVector talking about what they're not doing to catch the network traffic, but I didn't catch what they are doing to catch the network traffic. What we're not doing: we don't use eBPF or iptables, that's what we were referring to, I think. Right. How do I say this? Basically, we are using some proprietary technology. We deploy containers inside the environment. There are four different kinds. One of those containers is an enforcer that sits next to the virtual switch on the host.
So we are able to see the network traffic inside of that host, off of that virtual switch. And that means we can see all the pod-to-pod, container-to-container traffic. We can see node-to-node traffic. That's how we're able to see that traffic: we act as a tap. We can see the traffic, we can do that deep packet inspection, identify the layer seven protocol, validate that protocol, learn what that traffic pattern is, and build a rule around that. We don't use eBPF as a source of truth, and we're not using kernel shims, because that's got some performance implications and it's only layer three, layer four information. You'd have to glom on additional information from other sources to try to repackage and present what's happening based on settings or manifests or those kinds of things. We're using the live traffic to see what's happening right now. In fact, that's what I'm going to demonstrate a little bit, how we're able to do that. So hopefully that's a decent enough answer. I think it is. But now you actually got me curious about the performance implications of deep packet inspection. But I'll wait till you get done showing us your wares; otherwise, we could potentially run out of time here today. Sure. And by the way, you did have to bounce out and come back. What was it, just a system setting? Yeah, a system setting, and it wanted me to restart. So I'm hoping you can see my screen now. We can see your screen. Yes. All right. Well, this is my only PowerPoint slide, so I've got to make it good. I might show the architecture as well. But what I'm going to show you is that behavioral zero trust, or behavior-based zero trust. I'll just explain those operational modes here, and then I'll show you what that looks like, and you'll see it live in action. So when you deploy NeuVector into a cluster, we are immediately in what we call discover mode.
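The tap-and-learn approach just described, where each flow observed during discovery becomes an allow rule, can be sketched in a few lines. The group names and data structures here are illustrative assumptions, not NeuVector's actual rule model.

```python
# Sketch of the "learn from live traffic" idea: during discovery, each
# observed flow (source group, destination group, layer-7 protocol)
# becomes an allow rule. Names and structure are illustrative only.

from dataclasses import dataclass

@dataclass(frozen=True)
class FlowRule:
    src: str        # e.g. a "frontend" pod group
    dst: str        # e.g. an "orders-db" pod group
    protocol: str   # layer-7 protocol identified by DPI, e.g. "Redis"

def learn_rules(observed_flows):
    """Discovery: every flow seen on the tap is assumed legitimate."""
    return {FlowRule(*f) for f in observed_flows}

def is_allowed(rules, flow):
    """After learning: anything outside the learned set is anomalous."""
    return FlowRule(*flow) in rules

rules = learn_rules([
    ("frontend", "orders-db", "MySQL"),
    ("frontend", "cache", "Redis"),
])
```

After learning, `is_allowed(rules, ("frontend", "cache", "SSH"))` would come back false: same pods, same port perhaps, but the wrong protocol, which is exactly the tunneling case DPI is meant to catch.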
Discover mode is going to learn all of the network behavior using that deep packet inspection. We're also going to identify all the process behavior that's running in all of those containers; we do use the containers as our source of truth there. So we're able to collect process information and network information, and from that, we build our rules. These blue lines that you see here are going to be similar to the blue lines that we see in our network activity screen. Now, albeit this is a small demo environment, but what you're seeing here is the traffic. I've got an Istio operator, and I'm seeing SSL traffic there. As I look at some of these other lines, let's see here, I've got HTTP traffic, and then I think I even had some Redis. We can identify 35 different application protocols happening live inside of a cluster. And again, that's validating that application protocol as well. So we learn... Yes, I have a question for you. Sorry to interrupt. Actually, I'm not sorry; it's my show. How many different types of traffic are there? Meaning, I would imagine there's got to be some standard core set: HTTP traffic, SSL, whatever. How do you deal with new types of traffic, when there's some new thing that someone makes that has some new type of traffic, for lack of a better word? How do you folks find out about that, and how do you deal with it? Usually from customers. Currently, these are the 34 protocols that we recognize. Obviously, that's not all of them; I have no idea how many various application protocols there are. We have customers that have created their own application protocols specific to their environments, and I'm sure part of that was for efficiency or speed or performance or even security. So whenever a customer says, hey, we've got this protocol that's not on your list, can you add that? We add it. Glen, you may have additional details on that.
No, that's exactly right. I mean, it's important to point out it's more than just HTTP or just TCP. You can see ICMP there, because attacks can come through ICMP as well. But as Tracy mentioned, many companies have their own applications, and if we don't add a protocol to the standard product, they can define today how we verify that application protocol using our deep packet inspection. And so does Couchbase have their own way of communicating? I'm specifically looking at these database vendors; I see Oracle, Couchbase, Postgres, Mongo. Does every database have a different way of communicating? Yes, and the reason that's important, and this is very key to the difference between a layer seven firewall and a layer three, layer four one, is that there's a connection, let's say from the front-end web or application logic to the database. If you were just saying, well, this connection can connect on a certain port that the database runs on, you can have an attack that's trying to attack that port. One additional layer of security is to make sure that the proper application protocol is being used in that connection, and that's what we're verifying here. And then a third layer of deep packet inspection is actually looking within an approved application protocol for embedded attacks. The one you would be most familiar with would be a SQL injection attack: there's an allowed SQL connection between these two pods, but inside of that, there could be any of the attacks that you can see here. Okay, Tracy, sorry to interrupt you. Inquiring minds want to know. That was perfect. And in case you didn't notice, I switched: we went from app protocols to the threats that we can detect in those protocols. So these are the network-based threats that we can detect. And the way that we detect those is using the paradigm of zero trust.
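The third layer Glen describes, scanning inside an approved protocol for embedded attacks, can be illustrated for the SQL injection case with a toy pattern check. Real DPI engines use far more robust decoders and signatures; this single regex is purely illustrative.

```python
import re

# Toy third-layer DPI check: even on an *allowed* HTTP connection,
# the payload can be scanned for embedded attack patterns such as
# SQL injection. Real engines use far more robust signatures; this
# one regex is purely illustrative.

SQLI_PATTERN = re.compile(rb"('|%27)\s*(or|union|--)", re.IGNORECASE)

def contains_sql_injection(http_payload: bytes) -> bool:
    """Flag a crude SQL-injection signature inside an HTTP payload."""
    return SQLI_PATTERN.search(http_payload) is not None
```

The point is the layering: the port is allowed, the protocol is the expected one, and the inspection still catches a malicious payload riding inside it.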
Zero trust, unfortunately, has become one of those words, like you were saying about microservices and buzzword bingo; zero trust has become one of the spots on that bingo card. I guess I kind of relate this to DevOps, right? DevOps is really a philosophy, a cultural approach to developing software where you're trying to do things as early in development as possible, because if you can find issues earlier, it costs less, it's faster, yada, yada, yada. All of this is about just being better software developers. And zero trust is a component of security within this. We use zero trust when you log in, when you use two-factor authentication, when you have permissions on files, yada, yada, yada. It's all over the place. The way we're applying that approach of zero trust in NeuVector is that we first learn in discover mode, and this is live; it just requires that you exercise your application. Run the reports, make sure all the services are talking to all the services, click on all the buttons, add and delete things. That's learning the behavior. Once we've learned that, and that could be five minutes, five hours, a couple of days over the weekend, then you can turn off the learning, and that is what establishes the zero trust. Now, you need to validate the behavior that was learned: confirm that what we saw is all known. Yep, that all looks normal; talk to developers, et cetera. But they don't otherwise have to be involved in implementing this approach. When you turn on monitor mode, that means we're not going to block anything; it's completely safe. And we were talking about performance: in both discover and monitor, we act as a tap. We are not inline with that network traffic, and we're not inducing any performance lag anywhere. So this is kind of free, right?
You can turn on monitor mode. You can do all of this within 30 minutes of installing NeuVector, and now you have zero trust. You can add rules to this. But the important thing is, anytime we see an anomalous network connection or process, we are going to alert you on that, right? We're trying to reduce the size of the haystack where you're trying to find the needle, from a bunch of bales of hay down to a small stack of straw with hopefully no needles. But if there is one, it becomes readily apparent: oh, we need to check that out. Why is this network connection coming from some foreign country? What's going on here, right? We're trying to give you the ability to see something that you should probably be paying attention to, and to do that in real time. Then, if you want to block this activity, you can use protect mode to not only notify you of any anomalous activity, but also block it. So that's using zero trust to get alerts, to block, or to protect your environments. And again, we have customers running monitor mode, we have customers in protect mode, and anywhere in between; you can do this at a very granular level. So that's basically what I'm going to demonstrate: that we've learned this behavior, how we can alert, and how we can protect in this cluster right here. I shall pause. Any questions or thoughts? I have other questions, but I don't want to interrupt you, so I'll wait till you're done. But if anybody else has any questions on YouTube or Twitch or any other place, do drop them into chat and we'll make sure we get them addressed. I will try to make this very quick and painless. We're going to focus on one container, this container right here that's called struts. And the details around this container: it's currently in monitor mode. I have some network rules that I've learned.
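The discover, monitor, and protect behavior described above can be sketched as a tiny policy engine: discover learns everything it sees, monitor alerts on anything outside the learned set but lets it through, and protect alerts and blocks. The event strings and function names are assumptions for illustration; in the product, these modes are set per group in the console.

```python
from enum import Enum

# Sketch of the three operational modes described above. Names are
# illustrative; real NeuVector policy is configured per group.

class Mode(Enum):
    DISCOVER = "discover"  # learn behavior, build rules
    MONITOR = "monitor"    # alert on anything outside learned rules
    PROTECT = "protect"    # alert AND block

def handle_event(mode, learned, event, alerts):
    """Return True if the event is allowed to proceed."""
    if mode is Mode.DISCOVER:
        learned.add(event)            # everything seen becomes a rule
        return True
    if event in learned:
        return True                   # known-good behavior passes
    alerts.append(event)              # zero trust: unknown => alert
    return mode is not Mode.PROTECT   # ...and block only in protect mode

learned, alerts = set(), []
handle_event(Mode.DISCOVER, learned, "frontend->db:MySQL", alerts)
allowed_in_monitor = handle_event(Mode.MONITOR, learned, "bash in struts pod", alerts)
allowed_in_protect = handle_event(Mode.PROTECT, learned, "nc reverse shell", alerts)
```

Note how monitor mode lets the anomalous `bash` through but records an alert, while protect mode blocks the unknown event outright, matching what the demo shows next.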
So this application is a very simple little demo, just running this little super app container. It's got a web front end, it has a load balancer, and this is running in the cloud on GKE. And I'm in monitor mode because I've already learned, earlier this morning, our HTTP traffic to that. And I've also learned some process rules. So I'm going to go to my groups; got to move a window out of the way. Here is this container, and there are the processes. It's a Java app. Pause is a Kubernetes-native process. These are the processes we learned this morning when I was in discover mode, and also network rules. We do have file access rules; I won't get into those here. Mainly we're just going to focus on the process and the network rules. So this application is running here. I can create a new order, and we'll call this one for Tony, and I'm just going to give it some numbers, random keys. Everything's working on this app, everything is normal. And since we're in monitor mode, if anything anomalous happens, we're going to alert on it. So let's do some anomalous stuff. The very first thing I'm going to do is bash into that container. I did preload my commands here to make sure I'm in the right environments and all that kind of thing. So I'm bashing into this struts container. I'm in the container. I can do an ls, I can run commands. If I come back here, remember we're in monitor mode; we don't block in monitor mode. So if that's anomalous activity, and bash and du and ls are not listed here, is NeuVector going to tell me about those? Let's go to our security events and see if we saw anomalous activity. And we did. Here, for that struts container, we saw that ls, we saw that du, we saw that bash. We got alerts on that. Well, that's interesting. Let's go back to that container and... well, you know what? I'm going to do one more thing.
Before I move it to protect mode and block things, let's exit and attack it with the Struts exploit that Glenn was talking about earlier. I can execute a Python script that is going to hit this IP address on port 32182. This will be gone after this demo. So there's our IP address, and I'm going to hit it with basically a reverse shell: I'm going to try to run nc -nv with my IP address and a port for the reverse shell. So let's run that. Okay, I got a response. Now, while that didn't actually give me a reverse shell, I did get a response from that container, and that is exactly what a hacker wants. I want to know that I can cause a reaction that wasn't expected. Now I can delve deeper into this exploit to see what's going on there. What did NeuVector have to say about this? In our security events (we're in monitor mode, so again we don't block), what we saw is an Apache Struts remote code execution. We saw it on that container; we were able to identify that type of attack. We did an automatic packet capture. This is unique to NeuVector: you're able to do live packet capture automatically. As I scroll down, you can see all of the stuff that's in that packet. You can download that PCAP file, load it into Wireshark, and do some analysis there. Also notice down here in the process profile: there's my nc, there's my shell. All of those things were captured, network activity and process activity. We don't want to let that happen. So let's go back to that container and put it in protect mode. I'll do this pretty quick: I'm going to move it to protect mode, jump to my terminal window, and execute those attacks again. I get a confirmation that, yep, we're in protect mode. Let's go back and run that exact same command. And it's not coming back with anything. The reason is that we're in protect mode, so we are blocking.
If I go back to my notifications under those security events... we'll pay homage to the live demo gods. My gosh, it worked. No, we were confident. So there is that Apache Struts remote code execution, and notice we still did the packet capture, but this time the action is deny. We denied that network connection from even happening, right? So that is network-layer security. Also notice there were no other nc or sh processes; none of that happened. No processes were allowed to execute because the connection was never even made. In fact, did I time out? Yeah, this thing just timed out. Interesting. So if I'm blocking based on network, what about those bash commands I ran just a moment ago? I can't get there either. That's interesting. If I come back here and do a quick refresh, this is kind of the process that we see. So there's that bash, right? And we blocked it. You know what, let's allow bash just for the sake of this demo. I'm going to deploy that rule. That rule has now been deployed. If I come back here, now I should have an allow rule on that container. Any processes there? Oh, those are the processes we were seeing earlier. I'll go back to my group so you can see the process profile for that container. See, it just added that bash to my processes there. I'm still in protect mode, but it looks like maybe now that bash should work. And it does. Can I do my ls? No. Can I do my du? No. Why? Because, again, that is not learned behavior, so we're blocking those inside of the container. And if I were to delete this particular bash process from the container, with the container still in protect mode, and exit and try to go back in, I'd get blocked again. Let me show one last thing, because I want to show discover mode and how we actually do this. I'm going to move this to discover mode so we can learn those processes in real time, and you can see exactly how fast this happens. I'm bashed in. I can do ls. I can do du.
And when I come back here and hit refresh, there's my bash, there's my ls, there's my du. So we learn that fast, and we can make those rules that fast. And then I can export those rules. Say I'm going to go to a different environment; I'll put them in monitor mode. We can automatically create the YAML file so you can replicate those security policies across multiple clusters, basically cloning your zero-trust perimeter. Now you can do the exact same segmentation across multiple clusters, so you've got the exact same size haystack in which to find those needles, to look for that unknown behavior. That, in a nutshell, is what we do, and what we do that is so unique: making it easier to get granular security without having to manually create these files, without manually creating network security policies that only work at layer three and layer four. This makes it easier to build really good security early in development without adding a bunch of burden and toil to your environment. Hey, Chris. I was going to call you Chris. Your name is Tracy, right? Last time I talked... Yeah. Zachary from YouTube had a question here, and actually this is something I was going to ask you, slightly differently, but Zachary's question is the following: how does NeuVector's zero-trust approach work with Kubernetes network policies? Great question. We do not conflict with Kubernetes network security policies. Those are enforced inside of Kubernetes, they're at layer three and layer four, and there's no ordering to them. There are a few advantages to our network security policies. You can see here that they have IDs; we add rules to the top, and at the bottom of this list is basically a deny-all that you would have to match up against a network policy. But we're doing this independently. These policies are created, built, and enforced by NeuVector.
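The exported policy Tracy mentions is plain YAML, so it can be checked into version control and applied to another cluster. A rough sketch of what such an exported rule might look like (NeuVector does expose security rules as a CRD, but the field names and values below are illustrative approximations, not taken from the demo; consult the product documentation for the exact schema):

```yaml
# Illustrative sketch of an exported NeuVector security rule.
# Schema approximated from memory; verify against the official CRD docs.
apiVersion: neuvector.com/v1
kind: NvSecurityRule
metadata:
  name: nv.struts.demo
  namespace: demo
spec:
  target:
    policymode: Protect          # Discover / Monitor / Protect
    selector:
      name: nv.struts.demo
  process:                       # learned process allow-list
    - action: allow
      name: java
      path: /usr/bin/java
  ingress:                       # learned layer-7 network rules
    - action: allow
      applications: [HTTP]
      selector:
        name: nv.loadbalancer.demo
```

Applying the same file to a second cluster is what "cloning your zero-trust perimeter" amounts to in practice.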
We're not a skin on Kubernetes or OpenShift. There are some security tools that will just suggest a layer three, layer four policy for a user to implement; we're not doing that. Because we're doing layer seven, and Kubernetes doesn't do layer seven rules, we are independent of those. Now, if you've already spent a lot of time creating Kubernetes network security policies, don't toss them in the trash. You've already spent time defining the allowed paths, which is essentially what you do there: Kubernetes network policies have no deny rules, they're all allow rules. We, on the other hand, can do both allow and deny, and because we're independent, we can use our rules and our network inspection to validate that, yes, all the traffic is following those policies and we're not seeing anything outside of them. That becomes the big advantage in those environments. You may already have some network security policies, or maybe you're partially covered; we give you the ability to close that gap, for the known and the unknown, whether or not you're using them. You don't have to change them; continue to use them, and Kubernetes will still enforce them. The analogy I like to use is: if you've already put locks on your data center doors, now we can be the video camera on those doors, making sure nobody put tape over a lock and is going in and out even though you installed it. So we can be a verification and validation of the good work you're already doing. We're not managing those policies, so you could potentially have some conflicts, but our learning is 100% accurate: if you've defined rules, we're going to be able to validate that those rules are being followed, and there shouldn't be conflicts based on that. Did I miss anything, Glenn? That was a...
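For comparison, this is the allow-only Kubernetes NetworkPolicy model Tracy is describing: there is no deny verb, and once a policy selects a pod, anything not explicitly allowed is dropped. (The names and port here are illustrative, not from the demo.)

```yaml
# A layer 3/4 allow rule: only the frontend may reach the struts pod.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-struts
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: struts        # selecting the pod implies default-deny ingress
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080     # everything not listed here is silently dropped
```

Note that the policy can match labels, namespaces, ports, and CIDRs, but nothing above layer four, which is the gap the layer-seven inspection discussed here is meant to cover.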
No, but I would say we do help customers ultimately convert those policies into NeuVector. Most of our rules are a superset of that; we just build ours at layer seven by default. So at that point, if it's a similar rule, there's no need to keep the network policy and you can just get rid of it. But if you have specific things like egress rules or namespace-based rules that you want and need to maintain, then we can help you convert those into NeuVector, because you can do the same thing in NeuVector. The thing you don't get with network policy is all of the alerting, packet capture, forensics, visibility, SIEM integration, all of the stuff that security teams need to actually know that there's an attack happening. Exactly. Tracy, a question from Mike Waite here from New Hampshire. How does this work? Meaning, I work with lots of software companies. Some have software as a service, where everything's running inside their infrastructure and you just connect up to it to get whatever you need. Other companies, who have, say, APM, performance management, have agents that have to be deployed on each node and then phone home to the mothership. How does this work? Do you have agents? And the second part of that question, if I can ask it quickly: how does this impact the performance of the overall networking infrastructure? What a fantastic question. We have four minutes left. Oh, I'll make it quick. We do not use agents. Agents that talk directly to the Kubernetes API slow down the API. We've seen that firsthand, because we just completed performance testing with one of the hyperscale cloud providers, where they were testing NeuVector against other tools using 500-, 750-, and 1000-node clusters. No, I did not misspeak: 1000 VMs running in a single cluster, and then doing performance testing on that.
We were the only product able to complete all of the performance testing successfully in that kind of environment. And the reason we can perform that well is the following: we deploy as containers. The controller is the only container communicating with the Kubernetes API; no other container does. We don't have 100 agents or sidecars or anything like that; we're not using a sidecar overlay. We're inside of the cluster. There are four different types of containers. The controller handles our APIs. The scanner container does our scanning, very fast; I'll omit some details there. The web UI makes API calls to the controller, so everything you were seeing me do here was making API calls to that controller, and I also have a command-line interface built in. Then there's the enforcer, which is what we use to see the traffic and collect those processes. The enforcer sits next to the virtual switch; that's how we're able to see, inspect, and validate the network traffic. This positioning and how it works is one of our seven patents, and it's why we don't use agents. There is a single enforcer on every worker node, so in a thousand-node cluster we would have a thousand enforcers, and all of those enforcers talk to these controllers. We did have to beef up the sizing: all of these default to one vCPU and one gig of RAM, so on average they're fairly small containers. But when you get to 250-, 500-, or 1000-node clusters, you may have to beef up your controllers a little, because they're doing all that talking to all of those enforcers, etc. Hopefully that answered the question in less than four minutes. Two minutes to spare. I could go to single-word answers very quickly. Sorry. Yeah, we've got to have you back. We have a podcast show that we do; it's called Behind the App. We're actually moving it right now into the Red Hat corporate fold.
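The controller sizing Tracy described earlier comes down to a standard Kubernetes resources stanza on the controller's Deployment; roughly something like this (the values are illustrative guesses for a large cluster, not vendor guidance):

```yaml
# Fragment of a controller Deployment spec (illustrative values only).
# Defaults are about 1 vCPU / 1 GiB per container; on 250-1000 node
# clusters the controller fans out to many enforcers and may need more.
resources:
  requests:
    cpu: "2"
    memory: 4Gi
  limits:
    cpu: "4"
    memory: 8Gi
```

The enforcers themselves run as a DaemonSet, which is how one lands on every worker node.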
I started it about, I don't know, six years ago. It used to be called the Red Hat X podcast series. But maybe you folks can come on and do a really technical deep dive on our podcast series in the next month or so. That would be fun. As it turns out, I happen to have a... whoops, this is always embarrassing. Oh, here we go. So, what's next? How do we get in touch with you folks? How do we find out more about NeuVector? Here's the closing slide for the day. Where can we see you? Are you going to be at KubeCon? By the way, we're planning on going to KubeCon out in Los Angeles, but I was talking with our HR team yesterday and it looks like that might actually get switched back to being 100% virtual. But how do we find you folks going forward? Yes, neuvector.com is obviously the best place. I'll give you Tracy's cell phone number and put that in the chat. And then KubeCon is obviously a great place to just meet; I think Tracy will be there, so you can get live hands-on there as well. And we have a number of publications on the Red Hat site. We have several podcasts; I think I did one earlier this year on vulnerability management, so a totally different topic. Yeah, that was in March; the topic there was vulnerability management. So, many ways to reach us. And of course, as you mentioned, Mike, we have a certified operator as well as certified containers on the Red Hat site. Well, excellent. Before our producer starts sending me any more messages on Google Chat, I'm going to have to wrap this up for the day and thank you, Tracy and Glenn, again for being our victims here on the OpenShift Commons Briefings Operator Hours. Thanks, everyone, for coming, and we'll see you next time. Thanks for having us.