Good morning, good afternoon, good evening. Wherever you're hailing from, welcome to another DevSecOps is the Way here on Red Hat Live Streaming. I'm Chris Short, host and showrunner of Red Hat Live Streaming. I'm joined by the one and only Dave Muir. Dave, how are things in your neck of the woods today? Things are hot, man. Oh, well, you do live in Florida. Yeah, that's true. We've had a couple of brushes with some tropical storms. Good old Frank or Fred, whatever his name was, and then Grace, but we're doing good down here. So yeah, good stuff. We've got a great show today, I think, for everyone. This is obviously part of our DevSecOps is the Way series that we do monthly. August, can't believe it's already August, right? You always say that. But August is Network Controls Month, and we've got some great guests. Who better to talk about network controls than people from a company that has "network" in the name, right? So I'll let them introduce themselves here in a second. But before I do, I just wanted to talk about our series really quick. For those of you that don't know, we do a monthly series. We actually have two Red Hat Live Streaming shows. This is the one that's more about thought leadership around the topic we're talking about that month. And then we also have a second one, aligned to the OpenShift Commons Briefing, the Operator Hours show, where we feature partner technology alongside Red Hat. You can see the months; we've got a couple more to go in terms of topics. This all relates to a framework that Red Hat has created around DevSecOps, with all these different categories, and we've got different functions that tie to those categories, showing how you plot those features and functions onto a DevOps lifecycle. But we also do podcasts, and we try to publish a couple of blogs every month on each topic. To find out more, you can see all of that there; go to some of those links you see at the bottom.
So I think that's it for the intro for today. We're gonna talk a lot about networks and microsegmentation. We might throw out a buzzword here or there too, like zero trust, which is one of my recent favorites. And then we'll do some demos to show some of that stuff. So I think it's gonna be a good session. So with that, let me stop sharing and I will let our guests introduce themselves. We've got Ariful from Palo Alto Networks and Alex as well from Palo Alto Networks. Ariful, why don't you go ahead and introduce yourself to our audience and let us know where you're from. Sure. Hi, everybody, virtually, I suppose. My name is Ariful Huq, Director of Products at Palo Alto Networks, part of the Prisma Cloud team. I actually joined Palo Alto Networks through the acquisition of a company called Aporeto, and we're gonna talk about the technology that we built at Aporeto, which is now part of Prisma Cloud. Alex, go ahead. Hey everybody, Alex here. Glad to meet you all. I am a senior technical marketing engineer on the Palo Alto Networks Prisma Cloud team. I'm responsible for the microsegmentation module and I will be running the demo for you today. And while it's hot here in Florida, Alex is south of the equator, we were just talking about that. It's a little cool down there in Brazil, eh? That's true. I live in the countryside of Sao Paulo in Brazil. It's a two-hour drive from Sao Paulo city itself. We're in winter here, south of the equator, but right now it's a really hot day. Last week it was really cold, almost zero degrees Celsius, which I have no way to measure in Fahrenheit. I can't work my way around between Celsius and Fahrenheit, sorry guys, but it was pretty cold. And right now it's probably the warmest day we've had in the last three or four weeks. It's around 30 Celsius, which is hot. That is pretty warm. Yeah, that's really hot actually. Yeah, that's kind of crazy for winter, yeah.
By the way, zero Celsius is the only conversion I know to Fahrenheit, that's 32 degrees. That's right. Other than that, I have to look it up. And then, Ariful, you're from the Bay Area, right? That's right, I'm from the Bay Area. Yeah, I live up in Danville, but work out of our headquarters in Santa Clara. Yeah, we were just laughing about that because I'm also from a bay area, and whenever somebody says "the Bay Area," I'm like, well, that's not the only bay area, right? There's Tampa Bay. And by the way, San Francisco has like five different bays. That's true, yeah. But you said you're from Aporeto, what did they do? Yeah, so we were a startup that got acquired at the end of 2019; we joined Palo Alto Networks officially in January of 2020. And we were essentially building a network segmentation solution that's very different from what was available out there, and that goes into the impetus behind Aporeto: why was Aporeto started, and what were we trying to achieve? The founders come from a very network security focused, and actually network infrastructure focused, background, and one of their realizations, having worked at large networking companies, was that security, specifically network security, has always been tied to the underlying network infrastructure. But the shift to cloud makes that very difficult, because A, you don't own the network infrastructure in the cloud, you're using somebody else's network infrastructure, and it could be multiple network infrastructures you're having to deal with, things in the cloud and on-prem, right? That's one trend. Another trend was containerization with Kubernetes. Now, this was still early on, right?
When the founders started the company, Kubernetes was still very early on, but obviously they had the foresight that this was the technology where infrastructure would eventually go. And with containerization, pods are coming and going. Things are continuously auto-scaling, right? So relying on things like IP addresses and ports to define network segmentation becomes very difficult, because a pod is allocated an IP, then it's deallocated and reallocated to another pod. Something is sitting in an auto-scaling group behind a load balancer, and as your application scales, new IPs are getting allocated. So relying on IP-based mechanisms to solve the network segmentation problem, their thesis was, is just not going to work for the future; you need to rely on something much more application centric, much more workload centric. And that's what we call workload identity, right? I can talk more about how that works and how we actually thought about workload identity, yeah. Yeah, we'll definitely dive into those details, cool. So I wanted to give Alex an opportunity as well to mention how long he's been at Palo Alto Networks and a little bit about himself. So Alex, how long have you been at Palo Alto? Sure, and thanks for the question. I've been at Palo Alto for two years and seven months. I started as an SE covering Latin America on cloud security. I then transitioned to be an SE on the global team covering large financial institutions in the West. And recently I moved into the product management team, working with Ariful as a technical marketing engineer covering microsegmentation. So it's been a fun ride, covering customers of all different sizes and all different verticals, with different challenges in adopting cloud and adopting cloud native controls. Cool, awesome. Well, good stuff.
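The contrast Ariful draws, rules keyed to IP addresses versus rules keyed to workload identity, can be sketched in a few lines. This is a toy illustration only; the rule schema, labels, and IPs below are invented for the example, not Prisma Cloud's actual model:

```python
# Hypothetical sketch: why identity (labels) beats IPs for segmentation.
# The rule schema and label names here are illustrative, not a real product API.

def matches(selector: dict, workload_labels: dict) -> bool:
    """A rule selector matches a workload if every key/value pair is present."""
    return all(workload_labels.get(k) == v for k, v in selector.items())

# A rule written against identity survives pod churn.
rule = {"src": {"app": "frontend"}, "dst": {"app": "backend"}, "port": 5432}

# Two generations of the same pod: the IP changed, the labels did not.
pod_v1 = {"ip": "10.42.0.17", "labels": {"app": "frontend", "env": "prod"}}
pod_v2 = {"ip": "10.42.3.101", "labels": {"app": "frontend", "env": "prod"}}

backend = {"ip": "10.42.1.9", "labels": {"app": "backend", "env": "prod"}}

for pod in (pod_v1, pod_v2):
    allowed = matches(rule["src"], pod["labels"]) and matches(rule["dst"], backend["labels"])
    print(pod["ip"], "->", backend["ip"], "allowed:", allowed)  # True both times
```

An IP-based rule pinned to 10.42.0.17 would have silently stopped matching after the pod was rescheduled; the label-based rule keeps working because the identity, not the address, is what the rule selects on.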
So Ariful, you mentioned, you know, obviously microsegmentation, and Palo Alto Networks, I'm assuming, looked at the market and said, wow, Kubernetes, cloud, this is gonna be a tough problem to solve, and hence why they looked at Aporeto for that purchase. Is that sort of how it transpired, or I'm sure there was more to it? Oh yeah, I mean, there was a lot of stuff in the background, but I think you summarized it very well in terms of the recognition from leadership that the transition to cloud and the transition to Kubernetes essentially require us to rethink a problem like network segmentation and microsegmentation, right? And how do we solve that problem specifically for these cloud native types of environments? Absolutely, that was the impetus behind the entire discussion. And yeah. Yeah, I mean, Palo Alto has been around for a while, right? How long is that? Yeah, you got me. Definitely more than 12 years, I would say. Yeah, yeah. I think it's probably 14 years, actually. Yeah, yeah. And so yeah, I can imagine they do very well, obviously, in the legacy type of network controls, but as Kubernetes has been gaining popularity, it's a whole different ball of wax. I mean, you can imagine, as you mentioned, just thinking about infrastructure and controlling that. And if your network strategy is tied to infrastructure, you're never gonna get very far with Kubernetes, right? Yeah, yeah. Well, so how does Aporeto now fit into the overall Palo Alto product strategy? Where do you guys sit, basically? Yeah, it's a good question. And it kind of plays into the overall Prisma Cloud strategy, I would say. I mean, our goal at Prisma Cloud is actually to be a platform for security services, for various aspects of security for the cloud, right? There is no, as we say in security, there's no silver bullet. You don't buy one product and get everything; you have to think about it with defense in depth, right?
So obviously Prisma Cloud as a platform has visibility, compliance and governance capabilities, and cloud workload protection; we even recently announced identity and access management, for permissions management of cloud assets, and made an acquisition around shift-left security. And the bucket that we're in is cloud network security, specifically focused on network security for cloud, right? And that's the overall strategy for us. We have various pillars, as we call them, and our focus with Aporeto is the cloud network security bucket, bringing in the microsegmentation technology, yeah. Okay. What are some of the main things to think about, the key items, as you start to think about microsegmentation in cloud and containers? Like, you should think about A, B and C. Do you have a list, or is it just, deploy certain products? No, look, from my experience we've definitely gathered some of the critical things that customers require. I think first and foremost, you can't segment what you don't know you have, right? That's the first thing that we always hear from customers as they begin their segmentation journey, which can be for many reasons, right? It could be compliance, governance, making sure they protect their IP. There are lots of drivers for network segmentation, but coming back to the point, you can't segment what you can't see. So visibility is obviously the first step. And in fact, most customers start their journey towards network segmentation with visibility only, no enforcement of controls. So essentially being able to get that very quickly: most of the technologies in this space are agent based, so being able to deploy these agents very quickly and get that visibility.
And the second thing I've come to recognize, having talked to a lot of customers who have operationalized this technology, is that you wanna segment, but at the same time, you have to do it in a way that fits into workflows that cloud people are familiar with, that DevOps people are familiar with. And what I mean by that is, in the networking industry we've had an inclination to rely quite a bit on centralized ticketing systems: I have to open up a ticket to allow a rule that says, okay, allow this IP, allow this port, right? And that's what we call a people-and-process problem. And that's something we actually solve as well. We solve it by essentially creating guardrails. So you have the right guardrails that centralized security teams can set up, and then you give some ownership to your DevOps teams and your application owners, combined with that visibility, for them to write their own sets of rules as long as they stay within the guardrails. You don't have this problem of, I want to deploy an application, I launched something new in my stack, and now I need to open up a ticket. I can define that as code, and it can be part of my application deployment pipeline, right? Yeah, those are some of the main things I would say I've learned over time, because any time you deploy new technology, you wanna do it such that it doesn't hamper what you're really trying to achieve, right? Yeah, very, very DevOps-ish, with everything as code, right? I'm assuming there's a lot of GitOps-type stuff that customers should think about as they deploy microsegmentation across their clusters or network. So cool. That's good, yeah. I mentioned a buzzword earlier, zero trust, and actually it's something that I've been hearing now on a daily basis; it's a real thing, right? It's not just a marketing buzzword. But I'd say a year ago, I didn't hear it as often, maybe not at all.
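The guardrails-plus-delegation model Ariful lays out (central security sets hard limits, app teams write allow rules inside them) can be sketched roughly like this. Everything here, including the rule representation, is a hypothetical illustration of the idea, not the product's actual policy engine:

```python
# Illustrative two-tier evaluation: security-team guardrails are checked first,
# then application-owner allow rules. The schema is invented for this sketch.

GUARDRAILS = [
    # security team: deny any flow that crosses namespaces
    lambda src, dst: "deny" if src["namespace"] != dst["namespace"] else None,
]

APP_RULES = [
    # app owners: allow frontend -> backend inside their own namespace
    lambda src, dst: "allow" if (src["labels"].get("tier") == "frontend"
                                 and dst["labels"].get("tier") == "backend") else None,
]

def decide(src: dict, dst: dict) -> str:
    for guard in GUARDRAILS:
        if guard(src, dst) == "deny":
            return "deny"          # guardrail wins; app rules never see the flow
    for rule in APP_RULES:
        if rule(src, dst) == "allow":
            return "allow"
    return "deny"                  # default deny, in the zero-trust spirit

web = {"namespace": "shop", "labels": {"tier": "frontend"}}
db = {"namespace": "shop", "labels": {"tier": "backend"}}
other = {"namespace": "billing", "labels": {"tier": "backend"}}

print(decide(web, db))     # allow: inside the guardrails, an app rule matches
print(decide(web, other))  # deny: the cross-namespace guardrail blocks it
```

The point of the ordering is that no app-owner rule can ever override a guardrail, which is what lets the security team delegate rule-writing without losing control.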
So why do you think we're hearing zero trust more and more these days? That's an excellent question, and it's actually what led me to even think about joining Aporeto back when I did. Interestingly enough, if you think about enterprise security and access control, defining what needs to talk to what, you can summarize the problem as: you've got your applications, you've got users, and you've got devices, devices being IoT devices, right? And if you can solve an access control problem between these three things, apps talking to devices, devices talking to apps, users talking to apps, or apps talking to apps, and you can solve it holistically across your enterprise, you've really solved a lot of problems. And with zero trust, the assumption is the underlying network infrastructure is untrusted, and you want to rely on end-to-end authentication mechanisms to really figure out who this asset is, be it a user or an application, trying to talk to something else, right? And I mean, zero trust in my opinion is a natural shift in the security industry. I think it has multiple facets. It's not just on the networking side. You'll hear identity players like Okta, Ping, Auth0 talk about zero trust. You'll hear about VPN replacements with zero trust, right? So there are a lot of aspects to the zero trust industry, and it all boils down to, I think, the three actors that I talked about. And so when we started Aporeto, the initial problem statement was to solve this app-to-app problem with zero trust. Essentially, be able to really identify the two endpoints, assume the underlying network infrastructure is untrusted, and be able to authenticate, trust but verify, both those endpoints before allowing that communication, right? And that's really how we think about it. And that, to answer your question, is how I think the industry is evolving, right?
And one more example I'll give you, and I always think this is a nice way to think about zero trust. We've been in the IT industry for some time now. Ten years ago, when you logged into your corporate application, it was very common for us to VPN in and essentially assume that, hey, I came in from a part of the IP space and I should be allowed to access this application, right? Now, you ask, why is this even more relevant? With work from home, remote work, and remote offices, it's hard to make that assumption; you're coming in from the internet, so how do you identify this user now? And we've done this for some time now in the user space. You authenticate yourself against an identity service, the identity service, unbeknownst to you, is giving you some identity context through your browser, and then you're presenting that to the application. The application makes a determination: who is Ariful, and does he have access to this application, right? We solved that problem in the user world. We've done something very, very similar in the app world, because we think an application workload behaves very similarly to a user. It can pop up anywhere, it can be across multiple networks, those sorts of things, right? Yeah, it's funny, I read an article recently about what zero trust is not, and the first item was, I have a VPN. And you're right, because you could sign in to a VPN and get access to all these other resources and assets that maybe you shouldn't have access to. So have you seen an increase in your customers asking about zero trust because of the actual executive order that was released a couple of months back? I think zero trust was mentioned in there like 11 times. Yeah, I mean, we certainly saw the increase prior to that, but I think that's just fueled it even more, I would say. Clearly, obviously, when you have an OMB mandate, I remember, maybe I'm dating myself a little bit.
When we had the IPv6 OMB mandate, everybody was like, oh, we gotta have IPv6 support, right? I used to work for Juniper Networks back then and we were like, oh yeah, all of our technology needs to have IPv6, right? But yeah, you're right. I mean, that's certainly something that's gonna fuel zero trust even more, but we certainly saw it even before that. I think more so with remote work, and then also, on the network security side, just things being much more ephemeral, right? Yeah. Cool. So how does Prisma Cloud approach this uniquely? How does it help customers achieve zero trust? Yeah, I'll start with the networking side, focused on zero trust network security, and we'll delve into the other areas. On the networking side, with the addition of the microsegmentation capabilities, we're bringing into Prisma Cloud all of the things that we built around workload identity, authentication, authorization; that's really the crux of the technology that we've added to Prisma Cloud for zero trust network security, right? And so that's certainly an area of investment for us. There are other areas where we think about zero trust, and those have to do with things like permissions, least-privilege permissions. So when I talked about identity and access management, the IAM capabilities that we have, that's another area of investment for us, because you wanna give least privilege, you want to figure out whether this asset should actually have the privilege to access something else. So that's another area of zero trust that we're investing in. Cool. Okay. We do have a couple of demos, or at least one, right, set up. Do we wanna talk a little bit more about what we're about to see, or should we jump right into the demos? What do you think? Yeah, Alex does a pretty good job, but I'll just set it up for him a little bit.
You know, what we're gonna show you is essentially the integration of our technology, the Aporeto technology, into Prisma Cloud, in what is now the microsegmentation module, and all of the capabilities around that. So essentially, ahead of the general availability of this product, a quick demonstration of what we're capable of. Alex, take it away. Thank you, Ariful. So let me share my screen. Screen sharing hold music or something. So, the question of the year: can you all see my screen? Yes. Okay, perfect. So just to get started, what we have here is the solution deployed in a Kubernetes cluster. It could be Kubernetes, or it could be a virtual machine running a Red Hat or Windows OS, or any of the different Linux flavors. So what we have here is the Enforcer, which you can think of as an agent that is deployed in your OpenShift cluster or your Kubernetes cluster, running as a DaemonSet. I have six Enforcers here just because my cluster has six nodes. So it's one agent per node, and they do all the microsegmentation work. If we were talking about a VM, right, it's just a service running on that VM. And let me just jump into the UI. So here in the UI, if you are familiar with Prisma Cloud, this is just a Prisma Cloud tenant where we have the CSPM capabilities, the compute capabilities, and you have a new module, which is exactly what Ariful mentioned, the microsegmentation module, which is under this network security tab. If you are a Prisma Cloud customer today and you don't see this tab, it's just because this module is pre-GA; it will be GA in a couple of weeks and then it will be available across all customers and all tenants. But you can actually test the technology today if that's of interest to you. Then, going back to the environment: what you can see here are all the pods I have inside my namespace. I have a specific namespace.
You could have visibility across all of your environments, and you can deep dive into specific applications. Here I just have one application running in my cluster, and you can see all the traffic relationships between the pods. You can see pods that are connecting to DNS servers, pods that are connecting to a specific Google service, pods that are connecting to the metadata service, and pods that are receiving traffic from the load balancer, for example. I can actually click on the pod itself and have visibility into all the metadata that we extract from this service. As Ariful mentioned, we are identity based, which means we are collecting metadata out of every single resource we are protecting and generating a signed cryptographic token that is uniquely assigned to that instance. So when this instance now tries to communicate with something else, we do authentication, right? Is this front-end pod exactly who it claims to be? We leverage these signed tokens to do that. And then, if it's authenticated, we check whether it's also authorized. So you can see that we are collecting operating system data, cloud provider metadata, application metadata itself, and custom tags that you as a user can assign to your pods or VMs. Right, after I do that, I can actually inspect the flow and see exactly what is happening here. So I can see all the flows, all the traffic, and the policies that allow this traffic to happen. So now I have a policy that says I can access the front-end components, and what does this policy actually say? It says that traffic coming from the load balancers on TCP 80 and TCP 443, or other ports, is allowed to reach my application. Important to mention: there are zero network constructs in this policy. We're just leveraging metadata. And you as a security engineer, as an application owner, you can think about this.
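The authenticate-then-authorize sequence Alex describes (first verify the workload's signed identity, then evaluate metadata-only rules) could be modeled like this. The token structure and policy format are invented for illustration, and the signature check is stubbed out rather than doing real cryptography:

```python
# Minimal sketch of authenticate-then-authorize over workload identity.
# Token and policy shapes here are illustrative, not the product's.

POLICY = [
    # "traffic from the load balancer may reach the front end on 80/443"
    {"src": {"role": "load-balancer"}, "dst": {"app": "frontend"}, "ports": {80, 443}},
]

def verify(token: dict):
    """Stand-in for cryptographic verification of the signed identity token."""
    return token["claims"] if token.get("signature_valid") else None

def authorize(src_claims: dict, dst_claims: dict, port: int) -> bool:
    for rule in POLICY:
        if (all(src_claims.get(k) == v for k, v in rule["src"].items())
                and all(dst_claims.get(k) == v for k, v in rule["dst"].items())
                and port in rule["ports"]):
            return True
    return False

def admit(src_token: dict, dst_token: dict, port: int) -> bool:
    src, dst = verify(src_token), verify(dst_token)
    if src is None or dst is None:
        return False  # authentication failed: drop, never consult policy
    return authorize(src, dst, port)

lb = {"signature_valid": True, "claims": {"role": "load-balancer"}}
web = {"signature_valid": True, "claims": {"app": "frontend"}}
forged = {"signature_valid": False, "claims": {"role": "load-balancer"}}

print(admit(lb, web, 443))     # True: authenticated, and an allow rule matches
print(admit(forged, web, 443)) # False: authentication fails before policy runs
print(admit(lb, web, 22))      # False: no rule allows port 22
```

Note the ordering: a forged identity never even reaches the policy check, which is the "trust but verify both endpoints" idea in miniature.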
I am now defining what is allowed to happen in my application, or in my subnet, or in my entire cloud environment, based on compliance checks, based on security policies that I have, or business controls, right? For example, the front end can access the back end. So the application owners can now create their policy just using the metadata they already use today, like all those labels. So front-end equals allowed to access back-end. I don't need to leverage networking, IP addresses, things like that, which may be alien to a developer, right? They're not versed in network security. So now security teams can define the guardrails, for example, right? No application can talk to another across namespaces, which is exactly what happened here. I have this namespace, aptly named "rogue," which is trying to reach out to services in my application. So it's trying to escape its own namespace to access a different namespace. And it's been blocked, because I have a policy, created by the security team, that says no traffic between namespaces can ever happen. If it needs to happen, there needs to be a business reason, and then we'll have a policy that explicitly allows it. But by default, different from what Kubernetes does, we're not allowing namespaces to talk to each other. So then I have all this visibility into my pods, I understand the traffic relationships, and I can see what's going on there. Another interesting aspect: rather than just going to the UI and creating these policies manually here, how can I leverage policy as code to give power, moving from a centralized model, like Ariful mentioned, to a decentralized model where DevOps can write their policy as code and have it deployed in the pipeline directly? So what I have here are just two pipelines, one that creates an application in my cluster. I'm using Cloud Build to do that, but it's really just a tool, right?
It could be anything else: could be Jenkins, could be CodePipeline, Azure DevOps, GitLab, et cetera. Then I deploy this demo app, and alongside the application being deployed in the cluster, I'm also adding the policies as code, right? It's just a YAML definition that dictates exactly what policies should be allowed in the cluster. What I'm gonna do is just run this, and you'll be able to see that once the application is deployed in Kubernetes, I also automatically have the microsegmentation policies created with it, and you'll see here in my cluster a new namespace called demo-app, and we'll be able to see this application. Just to give you a different view while we wait for the pipeline to run: I can have visibility not just across one application, as I initially showed, but across my entire cluster and all the namespaces in it. So going from the top, I'm actually seeing my entire data center, my entire cloud account, or I can deep dive and just have visibility into a specific application, right? So you can see all those applications existing here. And we can actually see that my demo app is already created. Let me just check if it's finished. It's just finishing the deployment: the namespace was already created inside Prisma Cloud, and it's just finishing the deployment in the cluster. So things are happening as we speak. I can actually click here, taking advantage of the pipeline still running, and just show you guys one example of a YAML definition file. So here I have one policy, which I need to find... here, allow internal traffic. So let's just look at this definition, right? As you can see, it's just a YAML file that says allow internal traffic between all the pods inside the namespace, right? A very simple one. So I'm just saying: allow everything that happens inside my namespace, on any port.
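To give a feel for what such a file might look like, here is a rough, hypothetical policy-as-code definition. The field names, `kind`, and `apiVersion` below are invented for illustration and are not the actual Prisma Cloud microsegmentation schema:

```yaml
# Hypothetical policy-as-code file, deployed alongside the app in the pipeline.
# Field names are illustrative only, not the real product schema.
apiVersion: example.paloaltonetworks.com/v1
kind: NetworkRuleSet
metadata:
  name: demo-app-rules
  namespace: demo-app
spec:
  defaultAction: reject           # anything not explicitly allowed is rejected
  rules:
    - name: allow-internal-traffic
      action: allow
      source:
        match: {namespace: demo-app}
      destination:
        match: {namespace: demo-app}
      ports: ["any"]
```

Because a file like this lives in the same repository as the application manifests, the pipeline can apply both in one step and the app ships with its segmentation policy from day one.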
I can actually write policies as granular as I want, just leveraging metadata, like front-end to back-end: front-end can access back-end on specific ports. Here I just have a simple policy that says inside my namespace, allow everything to run. And that's one example of a microsegmentation policy written as code. See, it's done. Things are done here. Let's go back to Prisma Cloud and now check my new demo app. You can see that I have the demo app already deployed and I have policies in place. So I can click here on the Redis pod and actually see the policies that are tied to it. And you can see what is interesting: my default action is reject, which means anything that is not explicitly allowed is going to be rejected by default. And I can see the policy I created, the one we already saw in the YAML, which allows traffic between processing units. Those are just resources, workloads that are protected by the Enforcer, and the policy allows traffic between those PUs inside my namespace. So here we can leverage the UI, if you're more comfortable with graphical interfaces, to create our policies, or if you're more interested in moving to the DevOps way and doing things as code in the pipeline, we support both methods. With that, let me just send it back over to you guys. I hope this demo was helpful for understanding a little bit of what the Identity-Based Microsegmentation solution is. Yeah, Alex, I actually was gonna have you keep that up. I've got a couple of questions, if you don't mind. Probably some dumb questions. Of course, no dumb questions. Fire away. Well, you mentioned the authentication happens through tokens. Where do we see that? Like, how would a developer associate the token with whatever service they wanna connect to another service? Does that make sense? Alex, let me start; you wanna pull that back up for me? Yeah, let me start with that.
If you go, Alex, to one of the workloads, if you click on one of them, and then if you just go to the tags, yeah. So Dave, the way it works, and I didn't know at what point to dive into this deep detail, but I think during the demo is a good time. What's happening here is this specific workload has these tags associated with it, right? These key-value pairs, that's how we think about them. What happens when this pod tries to initiate a TCP three-way handshake, when it tries to initiate a connection with something else within that cluster, another service for instance, another pod: we take this identity information, these tags, these key-value pairs, and we only take the relevant ones, the ones that are most relevant to enforcing a specific rule, and we actually exchange them as part of the three-way handshake. With TCP, there's actually a way for you to insert payload. It's called the TCP Fast Open option, and you can insert some payload information that actually carries these tags. And the token that you highlighted is actually a JSON Web Token. So we've taken concepts from the web world. Remember I talked about the fact that users authenticate themselves against an identity service and then log into an application? Well, it turns out we use JSON Web Tokens for HTTP. We've just taken that concept, brought it down to TCP, and we use the JSON Web Token, which carries these key-value pairs as information within the token, and it's signed. So it's cryptographically verifiable. That's how you authenticate. And that exchange happens as part of the three-way handshake, so we can authenticate both endpoints and then allow the communication to happen based on a rule. Yeah. Okay, that makes sense. How are these tokens generally generated? They are generated on the host by the Enforcer agent.
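To make the token mechanics Ariful describes concrete, here is a toy, stdlib-only version of a signed JWT-style token carrying workload tags as claims. Real Enforcers use proper key management and real JOSE libraries; the secret, claim names, and helpers below are all illustrative:

```python
# Toy JWT-style token: identity claims (the workload's tags) signed so the
# receiving end can verify them. Illustration only; not production crypto.
import base64
import hashlib
import hmac
import json

SECRET = b"demo-signing-key"  # stands in for the Enforcer's signing key

def b64(data: bytes) -> str:
    """URL-safe base64 without padding, as JWTs use."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(claims: dict) -> str:
    header = b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64(json.dumps(claims, sort_keys=True).encode())
    mac = hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{b64(mac)}"

def verify(token: str):
    """Return the claims if the signature checks out, else None."""
    header, payload, sig = token.split(".")
    mac = hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64(mac), sig):
        return None  # tampered or forged token: authentication fails
    pad = "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload + pad))

claims = {"app": "frontend", "namespace": "demo-app", "image": "web:1.4"}
token = sign(claims)
assert verify(token) == claims  # genuine token: claims recovered

# Swapping in a forged payload while keeping the old signature must fail.
h, _, s = token.split(".")
forged_payload = b64(json.dumps({"app": "backend"}).encode())
assert verify(f"{h}.{forged_payload}.{s}") is None
```

The key property is the last check: a workload cannot claim to be something it is not, because the claims are covered by the signature that the receiving Enforcer verifies.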
So what happens is we read the metadata that's available to us through the Kube API, the Docker API, and other information that we have access to, and we generate this token, the JSON Web Token, on the Enforcer agent. Yeah. Okay, cool. Another question is around the policies. What are you seeing, I guess, in terms of customers or recommendations? Who creates those first? Do security professionals work with developers to create them? Or is there maybe a company bag of templates or whatever, and then developers pick from them? What have you seen? Yeah, it's an excellent question. And I'll let Alex chime in on some of the things we're also looking to improve here and things that we have today. Our assessment from most customers today that look at technology like this is that centralized security teams want to have their guardrails in place. Internet access, perhaps, is a guardrail rule: only the centralized security team will allow access to things on the internet. That could be one guardrail they put in place. Other guardrails, as Alex mentioned in this example, are things like namespace isolation: no communication between namespaces, only communication allowed within a namespace, and you can define those rules. So the centralized security teams want to define the guardrails, and then they want to be able to offload some of those rules to the development teams. That's where we see most customers love this approach of using namespaces and hierarchical rules to enable developers. We certainly see some organizations where everything is defined by centralized security, but I think as they make the transition towards more DevOps and DevSecOps types of deployments and operationalization mechanisms, they tend to favor this type of approach. Yeah. Yeah, Alex, I think you can chime in.
Alex, if you want to talk about some of the additional things that we'll be doing in terms of profiling and making it easier even for the centralized teams. Yeah. Yeah, I think that's right. As you mentioned, there are all kinds of deployments and all kinds of environments. So some customers really like to work with the template mode, as you mentioned, Dave. They have their set of templates that the security team writes, and then the devs will use them according to their needs. There are people who deploy the application in the pipeline, and one of their tasks is to profile the microsegmentation policies and then write their policies based on the requirements they find out doing the UAT tests, and then they write this template and it's deployed when it goes into production. So they have a testing phase where they create all the policies, and then they have the deployment phase where it goes into production. And there are environments where people really just use centralized controls, right? For example, namespace segmentation or isolation, and then the devs will live with those restrictions. So I think the important piece is we support all those different types of deployments and different methodologies of working. There are things we are working on in the product to make this even simpler, right? Two specific features, one of which is called policy suggestions. So assume a scenario where the user is just getting started with the technology, he's starting his microsegmentation journey, and there is a lot he needs to do and needs to understand. So how can I be a good user of the platform without being an expert myself? How can I start taking advantage of the platform on day one? Policy suggestions are a feature that we are going to introduce very soon in the platform that provides you some recommendations on things you need to enable, right?
If you want to do namespace segmentation, simply click here and then we will create those policies automatically for you. Do you want to allow traffic to specific infrastructure services? Like DNS, NTP, Windows updates. Just click on this policy and then we'll create that for you. Policy suggestions will leverage predefined candidate policies that you can just click and have deployed in your environment without you knowing anything about the platform itself. The second one is what we call app profiling. App profiling, as the name suggests, is: how do I profile all the traffic I am seeing here in my application and automatically generate templates that I can use in my pipeline, without me having to write a single policy? So we will be observing the flows. All those flows we can see here in the app dependency map are being observed by the platform, and based on those characteristics, a set of granular rule sets, which will be the least privilege for your application, will be generated as a template. Then you can take this template, deploy it using your pipeline, and deploy your application completely segmented from the get-go. I can also, as a UI user, say: oh, those are the recommendations, that's the granular set of policies I need in my environment, just push them and deploy them. So app profiling is really about learning your application's requirements and dependencies and having the policies you should be applying suggested to you. And then you can export those results and use them in your pipeline. Gotcha. Yeah, I was gonna ask a related question to that around best practices or recommendations for namespaces. I'm assuming you don't recommend having one namespace for production, which I've heard recently a company did one time. But what's the happy medium in terms of how many namespaces you create per application or per company or per division?
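The app-profiling idea Alex describes, turning observed flows into a least-privilege rule set, can be sketched in a few lines. This is a hypothetical illustration; the flow fields, service names, and rule shape are made up for the sketch, not the product's actual export format:

```python
# Hypothetical "app profiling" sketch: collapse observed traffic into a
# least-privilege rule set, one allow rule per distinct dependency.
observed_flows = [
    {"src": "frontend", "dst": "cartservice", "port": 7070},
    {"src": "frontend", "dst": "cartservice", "port": 7070},  # repeat observation
    {"src": "cartservice", "dst": "redis", "port": 6379},
    {"src": "frontend", "dst": "paymentservice", "port": 50051},
]

def profile_rules(flows):
    """One distinct allow rule per observed (src, dst, port) triple."""
    distinct = sorted({(f["src"], f["dst"], f["port"]) for f in flows})
    return [{"action": "allow", "src": s, "dst": d, "port": p} for s, d, p in distinct]

for rule in profile_rules(observed_flows):
    print(rule)
```

Anything not generated by the profile is implicitly denied, which is what makes the generated template "least privilege" for the observed application.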
What are your thoughts around that? Alex, you wanna go first? Sure. So, namespaces in the Kubernetes definition, right? I always recommend having your applications separated by namespace. So for example, here I have the Boutique app, right? It's on a dedicated namespace, and everything that belongs to the Boutique app exists here in this namespace. I'm gonna have other applications, like the demo app, or a guestbook, or any other applications, payments. They need to be in their own environment, and they need to be in their own namespace. So then if I have a requirement that two applications should not be talking to each other, they are completely segmented by default. I just enable the policy and I get this segmentation right there. I can have Kubernetes clusters that serve multiple purposes. They can serve production apps and, we find this quite often, they also serve testing apps, right? But they are in the same cluster. Then, leveraging metadata, I can say any namespace where the pods have this label, production, should not be talking to any pods that have this label, development, or testing, or anything like that. Again, the cloud team can define this as a higher-level policy and have it spread across the cluster. No application will be talking to another if they belong to different tiers, right? There are production apps and there are testing apps; even if they're in the same cluster, different namespaces, completely segmented. If they are in the same namespace, they will still be segmented because they're carrying different labels, but then it becomes harder for you as an engineer to understand those relationships, right? So in terms of visualization and best practices, always separate your applications by namespace. If you can have those applications, right?
Those tiers, prod and development and staging, separated into different clusters, even better, if you can afford it. And then have them still separated by namespace, and make sure that you're leveraging the proper metadata, in this case prod, dev, staging, things like that. So it becomes easier for you as a security engineer to write a policy that says the production namespace can never talk to the development or staging namespace. I'll give you one example on top of what Alex said. Our entire backend infrastructure for this product that you see here is actually based on Kubernetes today. And the way we do it is we actually have a dev and a prod cluster, right? And one of the benefits of this for us has been, when you go to the auditor, we went through a SOC 2 type of audit, it makes it much simpler to prove to the auditor who's touching the prod cluster versus who's touching the dev cluster, those sorts of things. And as Alex mentioned, if you have the ability to launch different clusters for dev and prod, from my experience managing this, that's definitely one of the recommendations I've made. The other thing around namespaces, surprisingly, Dave, I've noticed lots of organizations today do app-level segmentation with namespaces. So essentially app owners have their own namespace and they just segment it that way. And it's actually how we work even today at Pan IT, which is our IT department within Palo Alto Networks. We actually use namespaces to segment different app owners, and I've seen other customers do it that way. Yeah. In my mind it was a basic question, but asking that question to you all, I think, has some interesting implications, because as a former developer, when I was thinking about, well, how should I create my namespaces?
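The tier-based guardrail described above, production workloads never talking to development or staging, can be sketched as a simple label check. The "tier" label name and function shape are illustrative assumptions, not the product's rule syntax:

```python
def allowed(src_labels: dict, dst_labels: dict) -> bool:
    """Guardrail: workloads in different tiers (e.g. production vs development)
    may never talk to each other; everything else falls through to other rules.
    The "tier" label name is illustrative."""
    src_tier = src_labels.get("tier")
    dst_tier = dst_labels.get("tier")
    if src_tier and dst_tier and src_tier != dst_tier:
        return False
    return True

print(allowed({"tier": "production"}, {"tier": "development"}))  # → False
print(allowed({"tier": "production"}, {"tier": "production"}))   # → True
```

Because the check keys off metadata rather than IP addresses, it holds even when prod and dev pods share the same cluster, which is the scenario Alex called out.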
I rarely thought about security and zero trust. But when security comes into the picture, I think it's a different recommendation, because you have to start thinking about, well, where are my different services, how do they communicate with each other, right? Another question, another "micro" word: microservices. Does this just get unwieldy and complex when you bring microservices into microsegmentation? How do you not go micro crazy? It's a very important point. Yeah, you can obviously dig yourself a hole by trying to go really granular, but at the same time, what you really need to assess is: what is the risk that you're trying to mitigate, right? That's my discovery from having talked to a number of customers in this space, and it's actually what we do even to protect our own backend infrastructure. With microservices, obviously, as Alex is showing you, I think this is, Alex, is this the Sock Shop app from Google? Yes, this is the Boutique app. I'm sorry, it's the Boutique app, not the Sock Shop. So you've got front end, payment services, all of these are microservices. But the way you can think about it is, you can put all of these within a single namespace and isolate this specific application. Anything that wants to talk to this specific application, you can explicitly define rules for. A simple rule can be: anything within this namespace can talk to each other, and anything outside this namespace needs explicit rules. Obviously, if the front end needs to talk to the internet, then you explicitly define those rules. Or if this specific application has a dependency on a shared service, for instance, then you wanna explicitly define those rules.
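The pattern Arif is describing, allow everything inside the application's namespace and require explicit rules for anything crossing the boundary, might look like the following sketch. All names and the rule representation are hypothetical:

```python
# Hypothetical ring-fencing sketch: traffic inside the application's namespace
# is allowed; anything crossing the namespace boundary needs an explicit rule.
explicit_rules = {
    ("boutique/frontend", "external/internet"),  # front end may reach the internet
    ("boutique/cartservice", "shared/redis"),    # dependency on a shared service
}

def ring_fence_allows(src_ns: str, src: str, dst_ns: str, dst: str) -> bool:
    if src_ns == dst_ns:  # inside the fence: components may talk freely
        return True
    return (f"{src_ns}/{src}", f"{dst_ns}/{dst}") in explicit_rules

print(ring_fence_allows("boutique", "frontend", "boutique", "paymentservice"))  # → True
print(ring_fence_allows("boutique", "cartservice", "shared", "redis"))          # → True
print(ring_fence_allows("boutique", "frontend", "shared", "redis"))             # → False
```

The appeal of this shape is that the explicit-rule set stays small: it only has to enumerate boundary crossings, not every pair of microservices inside the application.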
That's the way I've seen most customers think about it. There's a term in the industry called ring fencing: you ring-fence a specific application, you allow anything inside that application, the components, to talk to each other, and then you explicitly allow communications outside. And I've seen that as a way to solve this problem, because you certainly don't want to get too granular; at that point things might just add operational complexity with less benefit. Yeah, good. Well, we have a little bit of time left, was there anything else from the demo you wanted to mention or show? No, Dave, I think that was all that I wanted to show, exactly how the platform works and how you can write policies just using known metadata, and showing the deployment using the pipeline. So to the audience, I hope this was beneficial for you. Yeah, I guess my last question is sort of future thinking, and I'm not asking for roadmap, but where do you all think microsegmentation and zero trust are going in the future? What are the big things in your head in terms of what we have to solve for next? Is it gonna be something new other than Kubernetes that we have to worry about? What are your thoughts? Yeah, it's a good question, and we're always thinking about how technology is evolving, how Kubernetes is evolving over time, and in this instance, even how microservices are being deployed. I think one of the shifts I definitely see happening is, today we're thinking about layer three, layer four network segmentation using TCP. As you move to a world of microservices in Kubernetes, you have the concept of services, so things get fronted through a load balancer, and you start to think about TLS as the next tier, and then even above that, you start to think about HTTP-level authentication and authorization, API-level granularity in terms of defining what should access what.
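The API-level granularity Arif mentions could be sketched as an allowlist of methods and endpoints checked after the peer has been authenticated. Every caller and endpoint name here is made up for illustration:

```python
# Hypothetical sketch of API-level authorization layered on top of mutual TLS:
# once a caller is authenticated, only specific (method, endpoint) pairs are
# authorized for it.
api_allowlist = {
    "frontend": {("GET", "/cart"), ("POST", "/cart"), ("POST", "/checkout")},
    "recommendationservice": {("GET", "/products")},
}

def authorize(caller: str, method: str, endpoint: str) -> bool:
    """Allow the call only if it appears in the caller's allowlist."""
    return (method, endpoint) in api_allowlist.get(caller, set())

print(authorize("frontend", "POST", "/checkout"))           # → True
print(authorize("frontend", "DELETE", "/cart"))             # → False
print(authorize("recommendationservice", "POST", "/cart"))  # → False
```

This is the distinction Arif draws between authentication and authorization: mutual TLS proves who the caller is, while a check like this decides which API calls that caller may make.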
I think that's where the future is gonna go for a technology like ours. In fact, that's where we're making investments. I'll give you a simple example. The next thing we're looking at doing, and it'll come out shortly, is moving this to a services layer, essentially with TLS, right? So instead of looking at it from a TCP perspective, essentially using mutual TLS to solve this authentication problem, but then have authorization on top of it as well. There are a few vendors out there that enable mutual TLS for you, but we're looking at not just mutual TLS for authentication mechanisms, but even for authorization, right? So that's where I would see the next step. And further to that, start to look at API-level granularity as well: not just being able to say this pod can talk to this pod, you can set up mutual TLS, but even being able to define what API endpoint you should have access to, right? Right, what function of this app can access this API call, get, post, whatever, cool. But there are trade-offs here. One thing that I will mention, and this comes again from talking to a lot of customers: as you go further up the stack, as you start to think about TLS and HTTP, you start to end up being a proxy, and when you're a proxy, you have trade-offs. Now you're gonna be in the data path, right? So there are trade-offs to make as you make those decisions, but there are applications where you need to solve that problem that way, yeah. Yeah, good point. Okay, any other last thoughts before I show the outro slide here and wrap things up? No, I want to thank the Red Hat team for hosting us and giving us the time to present what we do and share our opinions, yeah. You bet, it was our pleasure. Let me just pop this slide back up here and I'll hand it over to Chris. But again, we have these... Oh, can you see? We're looking at your Raspberry Pi. Yeah. Sweet.
You can't see that I am a nerd. Oh, darn it. Let me try that again. Okay, this one? How's that? Yeah, that looks like what those slides look like. Right, this is my non-nerd side, PowerPoint slides. All right, so we do have this show every month. It's Red Hat streaming, this is DevSecOps is the way, we do thought leadership like this one. We also, you know, brought on a partner. Sometimes we have Red Hatters to talk about the category of the month. Our other shows focus primarily on partners as well. Look for our podcasts, just search Red Hat Podcast and you'll find the streaming service there. If you want to learn more, and by the way, Arif, for now I just put the Palo Alto Prisma Red Hat OpenShift URL here; if there's another URL you want to direct folks to, feel free. But I also added the red.ht/DevSecOps URL, where you can get a lot of information on what we're doing, what this framework is all about, and pointers to a lot of these shows as well. I think that's it. With that, I want to thank Arif and Alex from Palo Alto Networks for this great demo and thought leadership around microsegmentation, network controls, and zero trust. I'm sure we're going to have them on again in the future, because they definitely hit a couple of those other categories that you see there, like runtime monitoring. So we look forward to that; they're a great partner of ours and we look forward to the great partnership work we do with them. With that, I'll say thank you, and I hope everybody has a great day. Any last words, Chris? No, the next show up on the channel is the StackRox Community Office Hour. We'll be talking about eBPF. So if you are curious about eBPF, come swing by and we'll be doing a little 101 on that. So please feel free to join us and stay safe out there, folks. Thank you. Thanks. Bye.